

Moh. Khalid Hasan , Mostafa Zaman Chowdhury , Md. Shahjalal , Md. Mainul Islam and Yeong Min Jang

Optimum LED Coverage Utilization in OCC for Effective Communication with Mobile Robot

Abstract: Optical camera communication (OCC) is a promising technology for camera-mounted mobile robots (MRs), which can use their cameras to communicate with light-emitting diode (LED) infrastructures. Owing to the limited angle of view of the commercial cameras currently used in MRs, a camera cannot detect the LED everywhere inside the area illuminated by that LED; hence, much of the illumination surface remains unutilized. Herein, we propose an automatic camera inflection algorithm to successfully detect the LED anywhere inside an LED cell (the total coverage area illuminated by the LED). The inflection mechanism is a supervised learning approach. The technique utilizes the coordinate information (i.e., X and Y coordinates) of an LED cell, which is further employed to track the position of the LED. The coordinate data are automatically updated when the MR travels from one position to another. Furthermore, we perform simulations that include a discussion on tilt-angle variation with increasing distance traveled by the MR and a performance analysis that demonstrates the importance and feasibility of the developed algorithm. Finally, a testbed prototype for OCC is implemented to demonstrate the proof of the angle-measurement theory and the bit-error-rate performance of the proposed system.

Keywords: bit-error rate (BER), image sensor, light-emitting diode (LED), mobile robot (MR), tilt angle

I. INTRODUCTION

Since current commercial light-emitting diodes (LEDs) are inexpensive and highly energy-efficient, their use has greatly increased in various indoor and outdoor scenarios, broadening the research possibilities of optical wireless communication (OWC), which uses high-frequency pulses of LED light to transmit data [1]. OWC offers high performance in terms of data rate, security, and reliability, and its available spectrum can help relieve the bandwidth burden on the limited spectrum of radio-frequency (RF) bands. As a congruent complementary option to RF, OWC is an excellent choice for future heterogeneous networks [2].

Optical camera communication (OCC) is a sub-system of OWC wherein a camera is used to receive luminous LED data [3]. The unprecedented growth of camera-mounted smart devices observed in recent times opens up significant possibilities for OCC to address new challenges in different areas, such as positioning of mobile robots (MRs) and smartphones [4], [5], localized advertising, digital signage, augmented reality (AR) [2], and vehicle-to-everything communications [6], [7]. OCC requires little modification of existing hardware and is implemented by adding an embedded camera to receive optical signals [8]. Because it is regarded as a promising technology, IEEE has formed a task group (802.15.7m) to standardize OCC specifications [9]. OCC is an excellent solution for long-distance line-of-sight (LOS) communications and has high-performance characteristics, such as low interference, high security, excellent signal-to-noise ratio (SNR), and high stability with respect to varying communication distance. Conversely, the data rate of OCC is limited by the low sampling rate of current commercial cameras. For instance, data rates ranging from several bps to several kbps were achieved using a 30 fps camera [10]-[12]; however, high-speed cameras (e.g., 1000 fps [13]) can considerably alleviate this problem. Moreover, despite being a directional LOS communication technology, OCC suffers from interference generated by background lights because LED infrastructures are now commonplace in indoor environments. Direct interference from neighboring LEDs or their multiple reflections on walls and floors can be mitigated using region-of-interest (RoI) signaling, which requires complex image-processing functionalities [9].

Camera-mounted MRs are very popular in current indoor environments, and their cameras can be utilized to receive data from the LEDs. Since existing commercial cameras mounted on MRs offer an angle of view (AOV) smaller than that of commercial photodetectors (PDs), these cameras cannot communicate with an LED from everywhere within its cell. OCC requires that all, or at least some part, of the LED appear inside the AOV of the camera. When an MR travels within an LED cell, its camera may therefore be unable to communicate with the LED even though the MR is inside the cell. This prevents optimum utilization of the LED signal and can terminate the communication.

This study aims to ensure optimum optical signal reception using a camera image sensor (CIS) by utilizing the entire illumination area of the LED transmitter and dispelling interference generated by neighboring LEDs. We propose a camera inflection method that helps maintain the reception of information bits inside the LED cell. Additionally, an LED position detection algorithm is proposed, which is utilized to track the direction of the LED. The MR is given coordinate information based on its location inside the LED cell. This coordinate information is dynamically updated with the movement of the MR. Note that a rolling shutter camera is used to receive the data bits from the LED. Due to the rolling shutter effect, dark and bright stripes are generated in the CIS. The width of each stripe and the number of stripes are analyzed to decode the OCC data. At the moment when the LED almost disappears from the CIS, the number of stripes reaches a certain threshold below which data cannot be decoded. Consequently, the MR estimates the position of the LED and the tilt angle. The scheme is applicable to any OCC-based MR application. However, the communication delay is an important issue and should be as low as possible considering the mobility of the MR. A channel model for OCC to measure the SNR and bit-error-rate (BER) performance of our proposed scheme is also provided. We also develop a testbed for OCC in which an Android smartphone is used to receive the transmitted bits from the LED. Rolling shutter frequency-shift on-off keying (RS-FSOOK), an existing modulation scheme, is used to implement our proposed scheme.

The remainder of this paper is organized as follows. Section II comprises an overview of the channel model, SNR measurement, and the information-decoding principle of OCC. Section III explains the proposed algorithm. The performance of our proposed scheme is evaluated in Section IV, which includes simulation results and the testbed configuration for OCC. Finally, a summary and possible extensions of our work are discussed in Section V.

II. SYSTEM OVERVIEW

A. Channel model

As shown in Fig. 1, an LED transmitter and a camera receiver are positioned at A and B, respectively. We consider an indoor environment in which the room size is well above the size of the LED cell. We also consider the LED to be circular, consequently giving the LED cell a circular shape. The cells of neighboring LEDs are assumed not to overlap. The radiation pattern of the LEDs is affected by the asperity of the chip faces and the geometric conditions of the encapsulating lens. The luminous intensity model is a function of the angle of irradiance of the transmitted rays. The channel response for OCC is represented by the Lambertian radiant intensity [14], [15], which is expressed as

(1)
[TeX:] $$G_{a, b}=\left\{\begin{array}{ll} 0, & \alpha_{i n}>\beta_{A O V} \\ \frac{\left(m_{l}+1\right)}{2 \pi d_{a, b}^{2}} A_{c} g_{o p} \cos ^{m_{l}}\left(\alpha_{i r}\right) \cos \left(\alpha_{i n}\right), & \alpha_{i n} \leq \beta_{A O V} \end{array}\right.$$

where [TeX:] $$\alpha_{i r}$$ denotes the angle of irradiance of the LED, [TeX:] $$\alpha_{i n}$$ the corresponding angle of incidence, and [TeX:] $$g_{op}$$ the gain of the optical filter. [TeX:] $$m_{l}$$ is the Lambertian emission order, a function of the half-intensity radiation angle [TeX:] $$\theta_{1/2}$$ (the angle at which the radiation intensity is half of that in the main-beam direction), expressed as [TeX:] $$m_{l}=-\log _{\cos \left(\theta_{1 / 2}\right)} 2.$$ [TeX:] $$d_{a, b}\left(=\sqrt{d_{a, h}^{2}+d_{b, x}^{2}}\right)$$ is the Euclidean distance between the LED access point and the camera receiver, where [TeX:] $$d_{a, h}$$ and [TeX:] $$d_{b, x}$$ are its vertical and horizontal components, respectively. [TeX:] $$A_{c}$$ is the area of the LED projected onto the CIS. As illustrated in Fig. 2, we consider a circular LED with a radius of [TeX:] $$a_{l}$$. If [TeX:] $$\rho$$ is the edge length of a pixel, [TeX:] $$A_{c}$$ can be calculated in terms of pixels as follows

Fig. 1.

OCC channel model.

Fig. 2.

Image projection onto the CIS.

(2)
[TeX:] $$A_{c}=\frac{\pi a_{l}^{2} f_{o}^{2}}{\rho^{2} d_{a, b}^{2}}$$

where [TeX:] $$f_{o}$$ denotes the focal length.
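For illustration, (2) can be evaluated numerically. The sketch below is our own, not part of the paper's implementation; it plugs in the camera and LED parameters listed later in Table 1 (5 cm LED, 4.2 mm focal length, 2 µm pixel edge).

```python
import math

def projected_area_px(a_l, f_o, rho, d_ab):
    """Area of a circular LED (radius a_l) projected onto the CIS,
    in pixels, following (2). All lengths in metres."""
    return (math.pi * a_l**2 * f_o**2) / (rho**2 * d_ab**2)

# Table 1 values: 5 cm LED, 4.2 mm focal length, 2 um pixel edge, d_ab = 2 m
area = projected_area_px(a_l=0.05, f_o=4.2e-3, rho=2e-6, d_ab=2.0)
print(f"{area:.0f} pixels at 2 m")
```

Note the inverse-square dependence on distance: doubling [TeX:] $$d_{a, b}$$ quarters the number of occupied pixels.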

The most effective means to increase the dc gain is to decrease [TeX:] $$d_{b, x}$$, because when the MR travels through the LED cell, [TeX:] $$d_{b, x}$$ becomes the only variable parameter. Multipath fading and Doppler shift are negligible for an optical wireless channel because the optical signal frequency is considerably large compared to the rate of change of the impulse response [16].

The power that the signal carries to the CIS depends on the responsivity of the CIS, which is measured in Amperes per Watt and expressed as

(3)
[TeX:] $$R=\eta q_{e} \frac{\lambda}{h_{p l a n c k} c}$$

where [TeX:] $$\eta$$ denotes the quantum efficiency of the CIS, [TeX:] $$q_{e}$$ is the electron charge, [TeX:] $$h_{\text {planck}}$$ is Planck’s constant, [TeX:] $$c$$ is the speed of light, and [TeX:] $$\lambda$$ denotes the wavelength of the optical signal.

The noise power is also a function of [TeX:] $$R$$. OCC is less affected by the LOS components of the optical signals of neighboring LEDs owing to the low AOV of the CIS, although non-line-of-sight (NLOS) components reflected from painted walls and glass can contribute to interference. However, this effect is minimized to a great extent via spatial separation of these components in the CIS [17], [18]. OCC channel distortion is another important parameter that contributes toward decreasing the overall SNR. The noise arises mainly from the shot noise caused by background light sources and is modeled as additive white Gaussian noise (AWGN). The overall noise power is expressed as

(4)
[TeX:] $$P_{n}=q_{e} R p_{n} A_{c} f_{r}$$

where [TeX:] $$p_{n}$$ and [TeX:] $$f_{r}$$ denote the background light power per unit area and the sampling rate of the camera, respectively.

Therefore, the overall SNR is given in (5), where [TeX:] $$P_{t}$$ denotes the overall transmitted optical power.

B. Operating principle

In this study, RS-FSOOK is used as the modulation scheme, which follows the undersampling principle [19], [20]. As the MR travels inside the LED cell, its distance from the LED can either increase or decrease. We chose this technique for our proposed scheme because it performs better when the distance varies continuously. Here, two different frequencies are used to represent the ON and OFF states of the LED. If [TeX:] $$f$$ is the set of corresponding frequencies used for modulation, then

[TeX:] $$f=\left\{\begin{array}{l} f_{1} \text { for bit } 1 \\ f_{2} \text { for bit } 0 \end{array}\right.$$

The camera first captures many images. It then utilizes a start frame delimiter [20] to detect the boundary of the transmitted symbol. Since sampling is performed with a rolling shutter camera, these two frequencies produce patterns of alternating dark and bright stripes with different widths [21], [22]. Information sent from the LED is decoded from the stripe widths in the CIS. The stripe pattern in the CIS mainly depends on:

The size of the LED,

The distance from the LED to the camera, and

The readout architecture of the CIS.

The size of the LED indicates its radius if it is circular. Through several experiments, we found that the number of stripes decreases for LEDs with shorter diameters at the same distance between the LED and the camera. It is worth noting that since OCC is an intensity-modulation-based system, it is not necessary to image the entire LED onto the CIS to extract the encoded bits. To decode any information, the received image must occupy more than one pixel. Hence, a certain portion of the LED is sufficient to continue communication between the LED and the camera. However, since the number of stripes depends on the size of the LED, if only a portion of the LED appears in the CIS, the number of stripes is reduced as well. There is a lower limit on the number of stripes [TeX:] $$\left(n_{\min }\right)$$, below which the data bits cannot be extracted.
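The stripe-count threshold can be sketched as a simple guard in the decoder. The helper below is hypothetical (the paper does not give its decoder routine); it assumes the stripes have already been segmented from a frame and that the two expected stripe widths for the bit-1 and bit-0 frequencies are known.

```python
def decode_symbol(stripe_widths, n_min, w_bit1, w_bit0):
    """Hypothetical decoding guard: declare the link lost when fewer than
    n_min stripes are captured, otherwise classify the symbol by whichever
    expected stripe width the observed mean is closer to."""
    if len(stripe_widths) < n_min:
        return None  # too few stripes: trigger LED position estimation and tilt
    mean_w = sum(stripe_widths) / len(stripe_widths)
    return 1 if abs(mean_w - w_bit1) <= abs(mean_w - w_bit0) else 0
```

Returning `None` is the event that, in the proposed scheme, hands control over to the inflection mechanism of Section III.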

(5)
[TeX:] $$S N R_{O C C}=\frac{R}{\pi q_{e} p_{n} f_{r}}\left[\frac{a_{l} f_{o} P_{t}\left(m_{l}+1\right)}{2 d_{a, b}^{3}} g_{o p} \cos ^{m_{l}}\left(\alpha_{i r}\right) \cos \left(\alpha_{i n}\right)\right]^{2}$$
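As a numerical sketch of (5), the function below evaluates the SNR using the Table 1 parameters. The background-light power density [TeX:] $$p_{n}$$ is not specified in the paper, so the value used here is an assumed placeholder; angles are taken in radians.

```python
import math

def snr_occ(R, q_e, p_n, f_r, a_l, f_o, P_t, m_l, d_ab, g_op, a_ir, a_in):
    """Overall SNR at the CIS, following (5). Angles in radians."""
    bracket = (a_l * f_o * P_t * (m_l + 1)) / (2 * d_ab**3) * \
        g_op * math.cos(a_ir)**m_l * math.cos(a_in)
    return R / (math.pi * q_e * p_n * f_r) * bracket**2

# m_l = 1 for a 60-degree half-intensity angle; p_n below is assumed
params = dict(R=0.51, q_e=1.602e-19, p_n=1e-5, f_r=30,
              a_l=0.05, f_o=4.2e-3, P_t=10, m_l=1, g_op=1.0,
              a_ir=0.0, a_in=0.0)
print(snr_occ(d_ab=2.0, **params) > snr_occ(d_ab=3.0, **params))  # SNR falls with distance
```

The cubic distance term inside the bracket makes the SNR fall off as [TeX:] $$d_{a, b}^{-6}$$, which is why the drop near the cell edge in Section IV is so steep.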

One of the main factors that affect stripe patterns is the distance from the LED to the camera. The variation of this distance with time greatly affects the received signal power. The received image occupies more pixels when the distance is reduced, which increases the number of stripes significantly. Similarly, the distance has a maximum value beyond which the number of stripes falls below [TeX:] $$n_{\min }$$. Fig. 3 shows an experimental demonstration of the stripe pattern variations of an LED at different distances using the RS-FSOOK scheme for an LED radius of 5 cm.

The camera readout architecture has a profound impact on the formation of stripes. Mathematically, the width of the stripes is determined by the sequential readout architecture of the camera. If a camera with a total of [TeX:] $$n_{p}$$ pixels reads a pixel in [TeX:] $$T_{r}$$ seconds and images an LED modulated at frequency [TeX:] $$f$$, one complete cycle of the transmitted signal is exposed in the image frame every [TeX:] $$1 / f$$ seconds. Hence, the width of a stripe is expressed in terms of the pixel number in the following equation

(6)
[TeX:] $$S_{w}=\frac{1}{f T_{r}}$$
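Equation (6) maps each modulation frequency to a stripe width. A quick sketch, with an assumed per-pixel readout time [TeX:] $$T_{r}$$ (the paper does not list this value):

```python
def stripe_width_px(f, T_r):
    """Stripe width in pixels for modulation frequency f (Hz) and
    per-pixel readout time T_r (s), following (6)."""
    return 1.0 / (f * T_r)

# Assumed readout time; with f1 = 360 Hz and f2 = 375 Hz (Table 1),
# bit-1 stripes come out slightly wider than bit-0 stripes.
T_r = 3e-5
print(stripe_width_px(360, T_r), stripe_width_px(375, T_r))
```

The small width difference between the two frequencies is what the decoder has to resolve, which is why a minimum stripe count matters.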

III. PROPOSED METHOD

A. Proposed algorithm

As the distance from the LED to the lens is much greater than the focal length and the image distance (the distance between the lens and the CIS surface), the focal length is considered equal to the image distance. The AOV of the camera is measured either horizontally or vertically for a typical rectangular CIS with a non-unity aspect ratio. In this study, as shown in Fig. 4, we use the diagonal AOV for simplicity, where L indicates the diagonal length of the CIS. For a CIS with dimensions [TeX:] $$x \times y$$, the AOV of the camera is determined as

(7)
[TeX:] $$\beta_{A O V}=2 \tan ^{-1} \frac{L}{2 f_{o}}$$

where [TeX:] $$L=\sqrt{x^{2}+y^{2}}$$.
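A quick sketch of (7) using the 6 mm × 4 mm sensor and 4.2 mm focal length listed in Table 1:

```python
import math

def diagonal_aov(x, y, f_o):
    """Diagonal angle of view (radians) of a CIS with dimensions x by y
    and focal length f_o, following (7)."""
    L = math.hypot(x, y)  # diagonal length of the CIS
    return 2 * math.atan(L / (2 * f_o))

aov = diagonal_aov(6e-3, 4e-3, 4.2e-3)
print(f"{math.degrees(aov):.1f} degrees")
```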

To provide optimum data transfer for OCC, the transmitter should at least be inside the AOV of the camera. While the MR moves away from the LED or toward it, at some point the LED falls outside the camera's coverage area. Although the MR is inside the LED cell,

Fig. 3.

Experimental illustration of the stripe pattern of the RS-FSOOK scheme for distances of (a) 0.5 m, (b) 1.5 m, (c) 2 m, and (d) 3 m between the LED and the CIS.

Fig. 4.

The total area of CIS coverage.

no optical signal is received. To deal with this problem, first, we need to find out the distance covered by the camera [TeX:] $$d_{A O V}$$ as shown in Fig. 4. Through geometric analysis, the distance can be determined as

(8)
[TeX:] $$d_{A O V}=2 d_{a, h} \tan \left(\frac{\beta_{A O V}}{2}\right)$$
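Equation (8) then gives the ground distance covered without any inflection. The sketch below assumes [TeX:] $$d_{a, h}=2$$ m (the 3 m room height minus the 1 m MR height from Table 1) and the AOV computed from the Table 1 optics:

```python
import math

def coverage_distance(d_ah, beta_aov):
    """Ground distance covered by the camera at vertical distance d_ah,
    following (8). beta_aov in radians."""
    return 2 * d_ah * math.tan(beta_aov / 2)

# d_ah = 2 m; beta_aov ~ 1.419 rad for the Table 1 sensor and focal length
print(f"{coverage_distance(2.0, 1.419):.2f} m")
```

The result is well short of the 8 m cell diameter used in Section IV, which is the motivation for tilting the camera.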

If the MR is directly under the LED, the camera is positioned with a camera tilt angle [TeX:] $$\theta=0^{\circ}$$ and a horizontal distance [TeX:] $$d_{b, x}=0$$. The camera certainly receives data from the LED as long as [TeX:] $$d_{b, x}$$ does not exceed the maximum distance given by the camera AOV. If [TeX:] $$d_{b, x}$$ exceeds the maximum allowable value, the LED is no longer projected onto the CIS; in this case, the camera is tilted at a certain angle so that the LED reappears.

The direction of the movement of the MR is tracked based on the moving direction of the image of LED projected onto the CIS. The LED cell is assumed as a two-dimensional (2D) surface, with an origin at the center. Fig. 5 shows the LED cell; in this figure, Area-1 implies that no inflection of the camera is required, but when the MR enters Area-2, the camera is tilted at a certain angle in order to communicate with the LED. MR can start from any position in Area-1.

Recall that if the MR is at the center of the LED cell, the camera receives the maximum amount of power. The projected image of the LED then occupies an area exactly at the center of the CIS. Thus, concerning the coordinate information of the CIS, the LED occupies the origin of the LED cell, which we refer to as Position-1. If the MR travels to another position, called Position-2, which is also in the light-green area (Area-1), then image-processing techniques and photogrammetry are used to measure the distance from the LED to the camera, from which the horizontal distance is calculated. Thus, the coordinate information of Position-2 is found as [TeX:] $$(x, y)$$. It should be noted that when the MR moves from one position to another, the projected image moves in the opposite direction in the CIS. A threshold is defined based on [TeX:] $$n_{\min }$$, below which the OCC link can be disconnected. When the MR just enters Area-2 (Position-3), the number of stripes reaches the threshold, and the MR calculates the coordinates of the LED cell, from which the position of the LED is calculated. Thus, the direction of the LED is easily located, and the camera is tilted in that direction (the tilt angle is calculated using (12)). The latest coordinates are [TeX:] $$\left(x^{\prime}, y^{\prime}\right)$$, which are further used to find the coordinates of the next position.
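The tracking loop described above can be sketched as follows. The helper names and the simple geometry are our assumptions, not the paper's exact routine: the MR keeps the camera in its natural position while the stripe count stays above the threshold, and otherwise computes the heading and tilt toward the LED from its current cell coordinate.

```python
import math

def tracking_step(mr_xy, stripe_count, n_min, d_ah):
    """Hypothetical tracking step. The LED sits at the cell origin (0, 0);
    mr_xy is the MR's current cell coordinate. Returns None inside Area-1,
    else (heading, tilt) toward the LED."""
    if stripe_count > n_min:
        return None  # Area-1: LED still decodable, keep camera horizontal
    x, y = mr_xy
    heading = math.atan2(-y, -x)      # direction from the MR back to the LED
    d_bx = math.hypot(x, y)           # horizontal distance to the LED
    tilt = math.atan2(d_bx, d_ah)     # elevation angle needed to face the LED
    return heading, tilt
```

Because the coordinates are updated at every move, the same step works regardless of the direction from which the MR entered Area-2.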

Fig. 5.

Coordinate information of the LED cell.
B. Theoretical analysis of tilt angle

As explained earlier, the camera does not need to capture the full LED image to decode the information for OCC; a part of the LED is enough to receive the required information. Recall (6), wherein the stripe width was found as [TeX:] $$S_{w}$$. The minimum number of pixels (occupied by the projected image) needed to decode data can be calculated as

(9)
[TeX:] $$A_{c m}=n_{\min } S_{w}=\frac{n_{\min }}{f T_{r}}$$

Let [TeX:] $$a_{l}^{\prime}$$ be the minimum extent of the LED, measured along its diameter, that must appear in the CIS to decode the data bits. Therefore, the area [TeX:] $$A_{l}^{\prime}$$ occupied by [TeX:] $$a_{l}^{\prime}$$ is determined as

(10)
[TeX:] $$A_{l}^{\prime}=\left\{\begin{array}{ll} 2 \int_{a_{l}-a_{l}^{\prime}}^{a_{l}} \sqrt{a_{l}^{2}-x^{2}}\, d x, & a_{l}>a_{l}^{\prime} \\ \frac{A_{l}}{2}, & a_{l}^{\prime}=a_{l} \\ A_{l}-2 \int_{a_{l}^{\prime}-a_{l}}^{a_{l}} \sqrt{a_{l}^{2}-x^{2}}\, d x, & a_{l}^{\prime}>a_{l} \\ A_{l}, & a_{l}^{\prime}=2 a_{l} \end{array}\right.$$
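The piecewise area in (10) is the area of a circular segment and can be checked numerically. The midpoint-rule sketch below is our own verification of the two limiting cases ([TeX:] $$a_{l}^{\prime}=a_{l}$$ gives half the LED, [TeX:] $$a_{l}^{\prime}=2 a_{l}$$ the whole LED):

```python
import math

def visible_led_area(a_l, a_lp, n=100000):
    """Numerically evaluate the visible LED area: the circular segment of a
    disc of radius a_l cut at chord position a_l - a_lp (cf. (10))."""
    lo = a_l - a_lp  # chord position; negative once more than the radius shows
    h = (a_l - lo) / n
    return 2.0 * sum(
        math.sqrt(max(a_l**2 - (lo + (i + 0.5) * h)**2, 0.0)) * h
        for i in range(n))

a = 0.05
print(abs(visible_led_area(a, a) - math.pi * a**2 / 2) < 1e-6)   # half the LED
print(abs(visible_led_area(a, 2 * a) - math.pi * a**2) < 1e-6)   # whole LED
```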

Fig. 6.

Illustration of the camera orientation for (a)Position-1, (b)Position-2,and (c)Position-3.

Fig. 6 shows the camera orientation for the three positions mentioned in the previous section. As discussed earlier, Position-2 indicates that the MR remains in Area-1, whereas Position-3 indicates Area-2. When the horizontal distance [TeX:] $$d_{b, x}$$ equals the radius of Area-1, the new area of the projected image in terms of pixel numbers is found by updating (2) and is determined as

(11)
[TeX:] $$A_{c p}^{\prime}=\frac{A_{l}^{\prime} f_{o}^{2}}{\rho^{2} d_{a, b}^{2}}$$

which is identical to [TeX:] $$A_{c m}$$. When the MR enters Area-2, the camera tilt angle [TeX:] $$\theta$$ is calculated as

(12)
[TeX:] $$\theta=\cot ^{-1} \frac{1}{2}\left(\frac{4 f_{o} d_{a, h}+2 L a_{l}^{\prime}+2 L \sigma+L d_{A O V}}{2 f_{o} a_{l}^{\prime}+2 f_{o} \sigma+f_{o} d_{A O V}-L d_{a, h}}\right)$$

where [TeX:] $$\sigma$$ is the extra distance traveled by the MR. This distance causes the LED to exit the camera's AOV, as shown in Fig. 6. [TeX:] $$\sigma$$ can be estimated from the moving speed of the MR. Eventually, the angular velocity of the camera is calculated from [TeX:] $$\theta$$ and the traveling time of the MR. The tilt angle θ achieves its minimum value [TeX:] $$\left(\theta_{\min }\right)$$ when the projected image in the CIS is equal to [TeX:] $$A_{c m}$$. It can be higher, but there exists a maximum threshold value, denoted by [TeX:] $$\theta_{\max }$$, beyond which the LED again disappears from the coverage area of the CIS. The threshold angle is represented as follows

(13)
[TeX:] $$\theta_{\max }=\cot ^{-1} \frac{1}{2}\left(\frac{4 f_{o} d_{a, h}-2 L a_{l}^{\prime}-L d_{A O V}-2 L \sigma}{2 f_{o} a_{l}^{\prime}+2 f_{o} \sigma+f_{o} d_{A O V}+L d_{a, h}}\right)$$
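Equations (12) and (13) can be evaluated directly. The sketch below interprets [TeX:] $$\cot ^{-1} \frac{1}{2}(\cdot)$$ as the arc-cotangent of half the bracketed ratio, and the sample values for [TeX:] $$a_{l}^{\prime}$$ and [TeX:] $$\sigma$$ are illustrative assumptions, not the paper's testbed numbers:

```python
import math

def arccot(x):
    """Arc-cotangent valid for any sign of x."""
    return math.atan2(1.0, x)

def theta_min(f_o, d_ah, L, a_lp, sigma, d_aov):
    """Minimum camera tilt angle, following (12)."""
    num = 4 * f_o * d_ah + 2 * L * a_lp + 2 * L * sigma + L * d_aov
    den = 2 * f_o * a_lp + 2 * f_o * sigma + f_o * d_aov - L * d_ah
    return arccot(0.5 * num / den)

def theta_max(f_o, d_ah, L, a_lp, sigma, d_aov):
    """Maximum tilt angle before the LED exits the CIS again, following (13)."""
    num = 4 * f_o * d_ah - 2 * L * a_lp - L * d_aov - 2 * L * sigma
    den = 2 * f_o * a_lp + 2 * f_o * sigma + f_o * d_aov + L * d_ah
    return arccot(0.5 * num / den)

# Table 1 camera (L ~ 7.21 mm, f_o = 4.2 mm), d_ah = 2 m; a_lp and sigma assumed
args = dict(f_o=4.2e-3, d_ah=2.0, L=7.21e-3, a_lp=0.02, sigma=0.5, d_aov=3.43)
print(math.degrees(theta_min(**args)), math.degrees(theta_max(**args)))
```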

The best tilt angle, at which [TeX:] $$S N R_{O C C}$$ is maximized, is found from the numerical average of [TeX:] $$\theta_{\min }$$ and [TeX:] $$\theta_{\max }$$. Furthermore, when no inflection is required, i.e., for the initial case, [TeX:] $$\alpha_{i n}$$ equals [TeX:] $$\alpha_{i r}$$, and the camera is kept fixed in its natural position. However, when inflection is required, [TeX:] $$\alpha_{i n}$$ is no longer equal to [TeX:] $$\alpha_{i r}$$. Therefore, as shown in Fig. 6, for a new horizontal distance [TeX:] $$d_{b, x}^{\prime}$$, [TeX:] $$\alpha_{i r}$$ has a new value [TeX:] $$\alpha_{i r}^{\prime}$$, which particularly depends on [TeX:] $$\sigma$$. [TeX:] $$\alpha_{i r}^{\prime}$$ is calculated as

(14)
[TeX:] $$\alpha_{i r}^{\prime}=\tan ^{-1}\left(\frac{d_{b, x}^{\prime}}{d_{a, h}}\right)$$

If the new angle of incidence is[TeX:] $$\alpha_{i n}^{\prime}$$, then it can be determined as follows

(15)
[TeX:] $$\alpha_{i n}^{\prime}(\mathrm{deg})=\left\{\begin{array}{ll} \theta-\alpha_{i r}^{\prime}, & \theta>\alpha_{i r}^{\prime} \\ \alpha_{i r}^{\prime}-\theta, & \theta<\alpha_{i r}^{\prime} \\ 0, & \theta=\alpha_{i r}^{\prime} \end{array}\right.$$

If we consider [TeX:] $$\theta=\theta_{\min }$$, the SNR received at the CIS is updated as follows

(16)
[TeX:] $$S N R_{O C C}=\frac{R A_{c p}^{\prime}}{q_{e} p_{n} f_{r}}\left[\frac{P_{t}\left(m_{l}+1\right)}{2 \pi d_{a, b}^{2}} g_{o p} \cos ^{m_{l}}\left(\alpha_{i r}^{\prime}\right) \cos \left(\alpha_{i n}^{\prime}\right)\right]^{2}$$

It is worth mentioning that the radius of Area-2 specifies how much coverage is gained by the inflection mechanism. The number of stripes equals [TeX:] $$n_{\min }$$ when the longest distance is achieved, i.e., when [TeX:] $$d_{b, x}^{\prime}$$ is maximized. The amount of added coverage depends on the intrinsic parameters of the camera and on the size, shape, and maximum irradiation angle of the LED, and is formulated as

(17)
[TeX:] $$\sigma_{\max }=\sqrt{\frac{f_{o}^{2} A_{l}^{\prime}}{\rho^{2} A_{c p}^{\prime}}-d_{a, h}^{2}}-\frac{1}{2}\left(d_{A O V}-2 a_{l}^{\prime}\right)$$
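A direct evaluation of (17); note from (11) that [TeX:] $$f_{o}^{2} A_{l}^{\prime} /\left(\rho^{2} A_{c p}^{\prime}\right)$$ reduces to a squared distance, so the square root is in metres. The visible-area and projected-area values used below are illustrative assumptions:

```python
import math

def sigma_max(f_o, rho, A_lp, A_cp, d_ah, d_aov, a_lp):
    """Maximum extra distance the MR can travel inside Area-2, per (17).
    A_lp is the visible LED area (m^2); A_cp the projected area (pixels)."""
    return (math.sqrt(f_o**2 * A_lp / (rho**2 * A_cp) - d_ah**2)
            - 0.5 * (d_aov - 2 * a_lp))

# Table 1 optics; A_lp (roughly half the 5 cm LED) and A_cp = 2000 px assumed
s = sigma_max(f_o=4.2e-3, rho=2e-6, A_lp=3.93e-3, A_cp=2000,
              d_ah=2.0, d_aov=3.43, a_lp=0.02)
print(f"{s:.2f} m")
```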

IV. PERFORMANCE EVALUATION

A. Simulation results

The coordinate information of the LED cell is chosen according to the increasing distance (in meters) inside the cell. In other words, the axes [TeX:] $$X X^{\prime}$$ and [TeX:] $$Y Y^{\prime}$$ have finite lengths close to the LED cell's diameter. The system specifications used in the simulations are summarized in Table 1. Any changes in the camera and luminaire parameters will result in variations

Table 1.

System parameters for the simulation.
Parameters Value
LED radius,[TeX:] $$a_{l}$$ 5 cm
Transmitted optical power,[TeX:] $$P_{t}$$ 10 W
Half-intensity radiation angle,[TeX:] $$\theta_{1 / 2}$$ [TeX:] $$60^{\circ}$$
Gain of optical filter, [TeX:] $$g_{o p}$$ 1.0
Frequency, [TeX:] $$f_{1}, f_{2}$$ 360 and 375 [TeX:] $$\mathrm{Hz}$$
Image sensor size 6 mm × 4 mm (3:2 aspect ratio)
Pixel edge length, [TeX:] $$\rho$$ [TeX:] $$2 \mu \mathrm{m}$$
Frame rate,[TeX:] $$f_{r}$$ 30 fps
Focal length, [TeX:] $$f_{o}$$ 4.2 mm
Responsivity, [TeX:] $$R$$ 0.51 A/W
Room height 3 m
MR height 1 m
LED cell radius 4 m

Fig. 7.

Distribution of the total power received by the CIS inside the LED cell.

in the simulation results. All simulation data were collected using MATLAB.

In our simulation results, we assumed that the MR starts traveling from the origin of the LED cell. Fig. 7 demonstrates a 2D contour view of the power variation received by the CIS inside the LED cell. It can be seen that maximum power is observed at the origin and decreases gradually when the MR travels in any direction away from the origin. This variation in the received power significantly contributes toward the variation in overall SNR.

As mentioned earlier, whenever the MR stays inside Area-1, the camera can communicate even while it remains in the horizontal position, and therefore no inflection is required. However, inflection is essential whenever the MR moves inside Area-2. Fig. 8 shows the simulation results for the minimum and maximum tilt-angle requirements. The graph indicates that the tilt angle increases from its initial value of zero in Area-1 as the MR moves further away from the origin inside Area-2. The automatic inflection depends entirely on the distance traveled by the MR. If the MR travels outside the LED cell, no optical power is received, and the camera is automatically restored to the horizontal position. Since an LED cell radius of 4 m is chosen, the camera needs to be tilted only while the MR remains within this distance.

Fig. 8.

Tilt angle variation with distance.

Fig. 9.

SNR distribution inside the LED cell with increasing distance from the origin.

Fig. 9 indicates the SNR variation inside the LED cell. Inside Area-1, the camera receives high SNR, which decreases significantly with the increasing distance. It can be seen that there is a dramatic fall when the MR nearly leaves Area-1. This is because only a part of the LED reaches the CIS as its projected image gradually disappears. Inside Area-2, the SNR is measured by assuming that the camera is tilted at the minimum tilt angle, which is indicated by the magenta line in Fig. 9.

In our proposed scheme, the MR measures its traveling distance and tilts the camera automatically. The main limitation of a fixed tilt angle is that the MR cannot detect the LED at all coordinates in the LED cell due to its limited AOV, as shown in Fig. 10. The LED detection possibility is measured for four fixed tilt angles: [TeX:] $$0^{\circ}, 25^{\circ}, 45^{\circ}$$, and [TeX:] $$60^{\circ}$$. The percentage of the area of the projected image in the CIS is represented on the horizontal axis. When the MR is located at the origin, the LED occupies the maximum area in the CIS, which is taken as 100% in the simulation. The figure shows that the MR cannot cover the entire LED cell with a fixed camera position.

Fig. 10.

Comparison of the LED detection probability for different fixed tilt angles with the percentage of area occupied within the CIS (100% at the LED cell origin).

Table 2.

Key parameters for the implementation.
Parameters Value
Driver operating voltage 5 V
Operating current 40 mA
LED radius 5 cm
LED height from ground 2 m
Camera exposure time [TeX:] $$125 \mu \mathrm{s}$$
Focal length 4.2 mm (26 mm eff)
Camera aperture f/1.7
B. OCC testbed setup

We developed an experimental testbed for OCC to provide proof of the angle measurement theory. The testbed was also used to measure the BER performance of the proposed system. The key parameters for the implementation are listed in Table 2. The LED driving circuitry includes a direct-current power supply, an open-source electronics prototyping platform (Arduino UNO R3), and a metal-oxide-semiconductor field-effect transistor for switching control. A Samsung Galaxy S7 edge camera was used as the receiver. OpenCV320 libraries were imported into the Android application and used to sample the captured frames. To minimize interference from other light sources inside the test room, we applied the RoI technique by controlling the focus points of the camera.

The OCC testbed platform is depicted in Fig. 11. As we did not attach the LED to the ceiling, we kept some settings unchanged during the experiment, including the height of the LED from the ground and the vertical distance between the camera and the LED [TeX:] $$\left(d_{a, h}\right)$$. The vertical distance was chosen as 1.7 m. Fig. 11(a) shows the position of the camera when the horizontal distance [TeX:] $$\left(d_{b, x}\right)$$ between the camera and the LED is 0.7 m. The camera is not inflected because the LED is inside its AOV. However, as shown in Fig. 11(b), the LED is situated outside the AOV of the camera at a horizontal distance of 2.3 m; therefore, the camera is tilted at a certain angle. As shown in Fig. 11(c), [TeX:] $$\theta_{\min }$$ is measured as [TeX:] $$22.5^{\circ}$$, which is almost the same as our simulation result. On the other hand, [TeX:] $$\sigma_{\max }$$ is measured as 3.1 m, which can be further increased using a larger LED and a camera with a higher [TeX:] $$d_{A O V}$$. The data rate achieved with [TeX:] $$\theta_{\min }=22.5^{\circ}$$ is 40 bps, which is higher than or identical to that of existing frequency-shift keying based OCC systems. A comparison of our system with the existing schemes is presented in Table 3.

Table 3.

Comparison of the implemented OCC system with the existing schemes.
Reference Scheme Frame rate (fps) Modulation frequency Data rate (bps)
[23] FSK 30 [TeX:] $$0.52-4 \mathrm{kHz}$$ 10
[24] FSOOK 30 [TeX:] $$>2 \mathrm{kHz}$$ 10
[10] UFSOOK 30 [TeX:] $$120 \text { and } 105 \mathrm{Hz}$$ 15
[25] MFSK 20 [TeX:] $$2 \text { and } 4 \mathrm{kHz}$$ 40
This work [TeX:] $$\left(\theta_{\min }=22.5^{\circ}\right)$$ RS-FSOOK 30 [TeX:] $$360 \text { and } 375 \mathrm{Hz}$$ 40

Fig. 11.

Implemented OCC testbed photos: (a) LED position inside the AOV of the camera, (b) LED position outside the AOV of the camera, and (c) tilt angle measurement.

Fig. 12.

BER performance of the proposed system.

The measured BER performance is illustrated in Fig. 12, including a comparison with the theoretical BER. We captured around 3000 frames for each position of the camera to obtain a precise BER. The BER was measured starting from a horizontal distance of 1 m, with the camera inflected at [TeX:] $$\theta_{\min }$$ when the LED appeared outside its AOV. As shown in Fig. 12, the BER increases with the distance between the LED and the camera. Owing to the RoI technique, the measured BER curve is considerably close to the theoretical one.

V. CONCLUSIONS

In this article, we proposed an automatic camera inflection algorithm for MRs to detect the location of LEDs inside an LED cell using OCC. The MR employed herein uses the coordinates of the LED cell to detect the LED position in order to determine the direction in which the camera needs to be tilted. The coordinates are automatically updated when the MR moves to the next position. We also provided the maximum and minimum angle calculation theory with an OCC channel model and the LED detection principles. We used RS-FSOOK signaling in our proposed scheme because, in comparison with other schemes, this modulation technique exhibits better performance concerning distance variation. We have implemented a testbed for OCC to demonstrate the BER performance and the proof of the angle calculation concept. Future research will include a comprehensive analysis of the possible complexity and optimality of the proposed method in different cases and environments.

Biography

Moh. Khalid Hasan

Moh. Khalid Hasan (Member, IEEE) received the B.Sc. degree in Electrical and Electronic Engineering (EEE) from the Khulna University of Engineering & Technology (KUET), Khulna, Bangladesh, in May 2017, and the M.Sc. degree in Electronics Engineering, Kookmin University, South Korea, in August 2019. In 2019, he also received the Academic Excellence Award from Kookmin University for his research. Since September 2019, he has been a fulltime Researcher with the Wireless Communications and Artificial Intelligence Lab., Department of Electronics Engineering, Kookmin University. His current research interests include wireless communications, 6G, wireless security, and artificial intelligence.

Biography

Mostafa Zaman Chowdhury

Mostafa Zaman Chowdhury (Senior Member, IEEE) received the B.Sc. degree in Electrical and Electronic Engineering from the Khulna University of Engineering & Technology (KUET), Bangladesh, in 2002, and the M.Sc. and Ph.D. degrees in Electronics Engineering from Kookmin University, South Korea, in 2008 and 2012, respectively. In 2003, he joined the Department of Electrical and Electronic Engineering, KUET, as a Lecturer, where he is currently a Professor. He was a Postdoctoral Researcher with Kookmin University from 2017 to 2019. He has published around 130 research papers in national and international conferences and journals. In 2008, he received the Excellent Student Award from Kookmin University. Three of his papers received Best Paper Awards at international conferences. He has been involved in many Korean government projects. His research interests include convergence networks, QoS provisioning, small-cell networks, the Internet of Things, eHealth, 5G and beyond communications, and optical wireless communication. He received the 2018 Best Reviewer Award from the ICT Express journal and the 2018 Education and Research Award from the Bangladesh Community in South Korea. He was a TPC Chair of the International Workshop on 5G/6G Mobile Communications in 2017 and 2018 and the Publicity Chair of the International Conference on Artificial Intelligence in Information and Communication in 2019 and 2020. He has served as a Reviewer for many international journals (including IEEE, Elsevier, Springer, ScienceDirect, MDPI, and Hindawi journals) and IEEE conferences. He has been working as an Editor for ICT Express, an Associate Editor of IEEE Access, a Lead Guest Editor for Wireless Communications and Mobile Computing, and a Guest Editor for Applied Sciences. He has also served as a TPC member for several IEEE conferences.

Biography

Md. Shahjalal

Md. Shahjalal (Student Member, IEEE) received his B.Sc. degree in Electrical and Electronic Engineering (EEE) from Khulna University of Engineering & Technology (KUET), Bangladesh, in May 2017. In August 2019, he obtained his M.Sc. degree in Electronics Engineering from Kookmin University, South Korea and received the Excellent Student Award. Currently, he is pursuing his Ph.D. in Electronics Engineering at Kookmin University. His research interests include optical wireless communications, wireless security, non-orthogonal multiple access, internet of things, low-power wide-area network, and 6G mobile communications.

Biography

Md. Mainul Islam

Md. Mainul Islam (Graduate Student Member, IEEE) received the B.Sc. degree in Electrical and Electronic Engineering from the Khulna University of Engineering & Technology, Khulna, Bangladesh, in 2018. He is currently working toward the M.Sc. degree in Electronics Engineering with the Wireless Communication and Artificial Intelligence Laboratory, Kookmin University, Seoul, South Korea. His research interests include blockchain, elliptic curve cryptography (ECC), data security, and IoT security. Mr. Islam has served as a Reviewer for IEEE Access, IEEE Systems Journal, and Transactions on Emerging Telecommunications Technologies.

Biography

Yeong Min Jang

Yeong Min Jang (Member, IEEE) received the B.E. and M.E. degrees in Electronics Engineering from Kyungpook National University, South Korea, in 1985 and 1987, respectively, and the Doctoral degree in Computer Science from the University of Massachusetts, USA, in 1999. He was with the Electronics and Telecommunications Research Institute from 1987 to 2000. Since 2002, he has been with the School of Electrical Engineering, Kookmin University, Seoul, South Korea, where he was the Director of the Ubiquitous IT Convergence Center between 2005 and 2010 and has been the Director of the LED Convergence Research Center since 2010 and the Director of the Internet of Energy Research Center since 2018. He is a Life Member of the Korean Institute of Communications and Information Sciences (KICS). His research interests include 5G/6G mobile communications, the Internet of energy, IoT platforms, AI platforms, eHealth, smart factories, optical wireless communications, optical camera communication, and the Internet of Things. He has organized several conferences and workshops, such as the International Conference on Ubiquitous and Future Networks (2009-2017), the International Conference on ICT Convergence (2010-2016), the International Conference on Artificial Intelligence in Information and Communication (2019-2021), the International Conference on Information Networking in 2015, and the International Workshop on Optical Wireless LED Communication Networks (2013-2016). He has received numerous awards, including the Young Science Award (2003) from the Korean Government, the KICS Dr. Irwin Jacobs Award (2017), and Outstanding Research Awards from the College of Creative Engineering, Kookmin University. He was the President of KICS in 2019. He serves as the Co-Editor-in-Chief of ICT Express (Elsevier). He was the Steering Chair of the Multi-Screen Service Forum from 2011 to 2019 and has been the Steering Chair of the Society Safety System Forum since 2015.
He served as the Chair of the IEEE 802.15.7m Optical Wireless Communications TG and is the Chairman of the IEEE 802.15.7a Higher Speed, Longer Range Optical Camera Communication (OCC) TG.

References

  • 1 M. Z. Chowdhury, M. K. Hasan, M. Shahjalal, M. T. Hossan, and Y. M. Jang, "Optical wireless hybrid networks: Trends, opportunities, challenges, and research directions," IEEE Commun. Surveys & Tuts., vol. 22, no. 2, pp. 930-966, Second quarter 2020.
  • 2 M. Z. Chowdhury, M. T. Hossan, A. Islam, and Y. M. Jang, "A comparative survey of optical wireless technologies: Architectures and applications," IEEE Access, vol. 6, pp. 9819-9840, Jan. 2018.
  • 3 V. P. Rachim and W. Chung, "Multilevel intensity-modulation for rolling shutter-based optical camera communication," IEEE Photonics Technol. Lett., vol. 30, no. 10, pp. 903-906, May 2018.
  • 4 B. Lin, Z. Ghassemlooy, C. Lin, X. Tang, Y. Li, and S. Zhang, "An indoor visible light positioning system based on optical camera communications," IEEE Photonics Technol. Lett., vol. 29, no. 7, pp. 579-582, Apr. 2017.
  • 5 L. Bai, Y. Yang, C. Guo, C. Feng, and X. Xu, "Camera assisted received signal strength ratio algorithm for indoor visible light positioning," IEEE Commun. Lett., vol. 23, no. 11, pp. 2022-2025, Nov. 2019.
  • 6 T. Yamazato et al., "Vehicle motion and pixel illumination modeling for image sensor based visible light communication," IEEE J. Sel. Areas Commun., vol. 33, no. 9, pp. 1793-1805, May 2015.
  • 7 T. Do and M. Yoo, "Multiple exposure coding for short and long dual transmission in vehicle optical camera communication," IEEE Access, vol. 7, pp. 35148-35161, Mar. 2019.
  • 8 W. Huang and Z. Xu, "Characteristics and performance of image sensor communication," IEEE Photonics J., vol. 9, no. 2, pp. 1-19, Apr. 2017.
  • 9 A. Islam, M. T. Hossan, and Y. M. Jang, "Convolutional neural network scheme-based optical camera communication system for intelligent internet of vehicles," International J. Distributed Sensor Netw., vol. 14, no. 4, Apr. 2018.
  • 10 P. Ji, H. Tsai, C. Wang, and F. Liu, "Vehicular visible light communications with LED taillight and rolling shutter camera," in Proc. IEEE VTC Spring, May 2014, pp. 1-6.
  • 11 F. Ahmed, M. K. Hasan, M. Shahjalal, M. M. Alam, and Y. M. Jang, "Experimental demonstration of continuous sensor data monitoring using neural network-based optical camera communications," IEEE Photonics J., Aug. 2020.
  • 12 P. Hu, P. H. Pathak, X. Feng, H. Fu, and P. Mohapatra, "ColorBars: Increasing data rate of LED-to-camera communication using color shift keying," in Proc. ACM CoNEXT, Dec. 2015.
  • 13 T. Yamazato et al., "Image-sensor-based visible light communication for automotive applications," IEEE Commun. Mag., vol. 52, no. 7, pp. 88-97, July 2014.
  • 14 J. M. Kahn and J. R. Barry, "Wireless infrared communications," Proc. IEEE, vol. 85, no. 2, pp. 265-298, Feb. 1997.
  • 15 M. K. Hasan, M. Z. Chowdhury, M. Shahjalal, V. T. Nguyen, and Y. M. Jang, "Performance analysis and improvement of optical camera communication," Applied Sciences, vol. 8, no. 12, Dec. 2018.
  • 16 A. Ashok, M. Gruteser, N. B. Mandayam, J. Silva, M. Varga, and K. J. Dana, "Challenge: Mobile optical networks through visual MIMO," in Proc. ACM MobiCom, Sept. 2010, pp. 105-112.
  • 17 S. Teli, W. A. Cahyadi, and Y. H. Chung, "Optical camera communication: Motion over camera," IEEE Commun. Mag., vol. 55, no. 8, pp. 156-162, Aug. 2017.
  • 18 K. Ebihara, K. Kamakura, and T. Yamazato, "Layered transmission of space-time coded signals for image-sensor-based visible light communications," J. Lightwave Technol., vol. 33, no. 20, pp. 4193-4206, Oct. 2015.
  • 19 P. Luo, M. Zhang, Z. Ghassemlooy, S. Zvanovec, S. Feng, and P. Zhang, "Undersampled-based modulation schemes for optical camera communications," IEEE Commun. Mag., vol. 56, no. 2, pp. 204-212, Feb. 2018.
  • 20 R. D. Roberts, "Undersampled frequency shift ON-OFF keying (UFSOOK) for camera communications (CamCom)," in Proc. WOCC, May 2013.
  • 21 C. W. Chow, C. Y. Chen, and S. H. Chen, "Enhancement of signal performance in LED visible light communications using mobile phone camera," IEEE Photonics J., vol. 7, no. 5, pp. 1-7, Oct. 2015.
  • 22 M. K. Hasan, N. T. Le, M. Shahjalal, M. Z. Chowdhury, and Y. M. Jang, "Simultaneous data transmission using multilevel LED in hybrid OCC/LiFi system: Concept and demonstration," IEEE Commun. Lett., vol. 23, no. 12, pp. 2296-2300, Dec. 2019.
  • 23 H. Y. Lee, H. M. Lin, Y. L. Wei, H. I. Wu, H. M. Tsai, and K. C. J. Lin, "RollingLight: Enabling line-of-sight light-to-camera communications," in Proc. MobiSys, May 2015.
  • 24 N. Rajagopal, P. Lazik, and A. Rowe, "Visual light landmarks for mobile devices," in Proc. ACM/IEEE IPSN, Apr. 2014.
  • 25 M. Shahjalal, M. T. Hossan, M. K. Hasan, M. Z. Chowdhury, N. T. Le, and Y. M. Jang, "An implementation approach and performance analysis of image sensor based multilateral indoor localization and navigation system," Wireless Commun. Mobile Comput., vol. 2018, Oct. 2018.