
Multiple Access, Modulation and Transceivers


Research topics:

Multiple-access communications for future wireless systems
Dr. Lie-Liang Yang
With the substantial increase in the number of Internet users and the development of new services, high-speed access is an important requirement for the future generations of wireless systems. Consequently, broadband systems with bandwidths much wider than those of the 3rd-generation systems are required to meet future demands. Hence, compatibility with both the emerging Broadband Radio Access Networks (BRAN), which have opted for a multi-carrier, Orthogonal Frequency Division Multiplexing (OFDM) based solution, and the existing 2nd- and 3rd-generation CDMA systems is an important consideration. A potential candidate multiple-access scheme meeting these requirements has been proposed by our group. The multiple-access scheme is constituted by frequency-hopping (FH) based multicarrier DS-CDMA (FH/MC DS-CDMA), where the entire bandwidth of future systems is divided into a number of sub-bands and each sub-band is assigned a subcarrier. According to the prevalent service requirements, the set of legitimate subcarriers can be distributed in line with the instantaneous information rate requirements. FH techniques are employed for each user, in order to occupy the whole system bandwidth and to efficiently utilize the system's frequency resources. Specifically, slow FH, fast FH or adaptive FH techniques can be utilized, depending on the system's design and the state-of-the-art. In FH/MC DS-CDMA systems the sub-bands are not required to be of equal bandwidth. Hence, existing 2nd- and 3rd-generation CDMA systems can be supported using one or more subcarriers, consequently simplifying the frequency resource management and efficiently utilizing the entire available bandwidth. This regime can also remove the spectrum segmentation of existing `legacy' systems, while ensuring compatibility with future BRAN and unlicensed systems.
Furthermore, a number of sub-channels with variable processing gains can be employed, in order to support various services requiring low- to very high-rate transmissions, for example for wireless Internet access.

Figure 1 : Transmitter diagram of the frequency-hopping multicarrier DS-CDMA system.

The transmitter of the proposed FH/MC DS-CDMA scheme is depicted in Figure 1. Each subcarrier of the K users in the
system is assigned a PN sequence, which produces the spread, wideband signals. In the figure, C(Q,Uk) represents a constant-weight code of user k with Uk number of `1's and (Q-Uk) number of `0's; hence, the weight of C(Q,Uk) is Uk. This code is read from a so-called constant-weight code book, which represents the frequency-hopping patterns. The constant-weight code plays two different roles. Its first role is that its weight - namely Uk - determines the number of subcarriers involved, while its second function is that the positions of the Uk number of binary `1's determine the selection of a set of Uk subcarrier frequencies from the Q outputs of the frequency synthesizer. High-rate and variable-rate transmissions can be implemented by employing a different number of subcarriers. At the transmitter of the k-th user in Figure 1 the bit stream is first serial-to-parallel (S-P) converted, yielding Uk parallel streams, a process controlled by the constant-weight code C(Q,Uk). After S-P conversion each stream is direct-sequence spread, in order to form the spread, wideband signal. These spread signals then modulate their corresponding subcarriers and are finally summed, in order to form the transmitted signal sk(t).
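As a rough sketch of the transmitter operation described above - with an invented hop pattern, spreading factor and sampling arrangement, since the actual system parameters are design choices - the constant-weight code can be seen selecting the active subcarriers before DS spreading and subcarrier modulation:

```python
import numpy as np

rng = np.random.default_rng(0)

Q, Uk = 8, 3                 # Q sub-bands; constant-weight code of weight Uk
chips_per_bit = 16           # DS spreading factor
spc = 8                      # samples per chip
n = chips_per_bit * spc      # samples per bit interval

# Constant-weight code word C(Q, Uk): the positions of its Uk `1's select
# which of the Q synthesizer outputs (subcarriers) are used for this hop
cw = np.zeros(Q, dtype=int)
cw[[1, 4, 6]] = 1            # one example frequency-hopping pattern
active = np.flatnonzero(cw)  # indices of the Uk selected subcarriers

bits = rng.integers(0, 2, Uk) * 2 - 1                 # S-P converted BPSK streams
pn = rng.integers(0, 2, (Uk, chips_per_bit)) * 2 - 1  # PN sequence per stream

t = np.arange(n)
s_k = np.zeros(n)
for i, q in enumerate(active):
    chips = np.repeat(bits[i] * pn[i], spc)           # direct-sequence spreading
    s_k += chips * np.cos(2 * np.pi * (q + 1) / (2 * spc) * t)  # subcarrier q
```

Changing the weight Uk of the code word directly changes the number of parallel streams, which is how the variable-rate transmission mentioned above is realised.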

The objective of this research is to investigate the relevant techniques in the context of FH/MC DS-CDMA, including modulation/demodulation, error-control, synchronisation, equalisation, multiuser interference suppression, adaptive detection, etc.

Residue Number System (RNS) and Redundant Residue Number System (RRNS): Theory and applications
Dr. Lie-Liang Yang
The theory of RNS originated in China, dating back as early as the third century. However, it was not widely accepted as an alternative arithmetic approach in digital systems, since complex computations are encountered in the conversion from the so-called residue to the decimal number system, or from decimal to residue. Nevertheless, commencing with the rapid evolution of digital computers and microelectronics technology in the 1950s, RNS arithmetic has attracted considerable attention in the field of designing high-speed special-purpose digital hardware that is suitable for very large scale integration (VLSI). Digital systems that are structured around RNS arithmetic units may play an important role in future ultra-high-speed dedicated real-time systems that support purely parallel processing. The objective of this project is to investigate the theories of RNS and RRNS, and their applications in wireless communications. The present study focuses on RRNS-based coding theory, which includes encoding and decoding methods of low complexity, systematic RRNS coding as well as turbo RRNS coding. In wireless communications RRNS is used for information protection, supporting high-rate and variable-rate services and parallel transmissions.
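The residue/decimal conversions mentioned above can be sketched in a few lines. The moduli below are small illustrative choices; the decimal recovery uses the Chinese Remainder Theorem, and the example shows why RNS suits parallel hardware - each residue digit is processed independently, with no carry propagation:

```python
from math import prod

# Pairwise-coprime moduli define the RNS; their product M bounds the range
moduli = (3, 5, 7)
M = prod(moduli)           # 105: integers 0..104 are uniquely representable

def to_residues(x):
    """Decimal -> residue representation."""
    return tuple(x % m for m in moduli)

def from_residues(r):
    """Residue -> decimal via the Chinese Remainder Theorem."""
    x = 0
    for ri, mi in zip(r, moduli):
        Mi = M // mi
        x += ri * Mi * pow(Mi, -1, mi)   # pow(..., -1, mi): modular inverse
    return x % M

# RNS arithmetic is digit-parallel: add each residue pair independently
a, b = 17, 30
ra, rb = to_residues(a), to_residues(b)
rsum = tuple((x + y) % m for x, y, m in zip(ra, rb, moduli))
print(from_residues(rsum))   # 47, recovered without any carry propagation
```

The redundant variant (RRNS) simply appends extra moduli; the surplus residues then act like parity symbols, which is the basis of the RRNS coding theory studied here.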
Multi-carrier CDMA research
Byoung Jo Choi
Multi-carrier CDMA is an enabling technology for future mobile phones. Future mobile phone services are likely to be endowed with crystal-clear integrated digital TV, digital Hi-Fi radio and high-speed Internet browsing, as well as video phone capabilities. All these services require efficient transmission of multimedia traffic. Multi-carrier CDMA is likely to be the transmission method of future mobiles, due to its bandwidth efficiency and inherent diversity over fading channels. However, multi-carrier signals exhibit highly varying envelope power waveforms, which hinder the widespread employment of multi-carrier CDMA. Our research focus is to study this phenomenon and to provide some practical solutions for it. Another important topic is the use of adaptive modulation, exploiting the channel quality fluctuations, so that more multimedia traffic can be exchanged using the same bandwidth.

Figure 2 : MC-CDMA and DS-CDMA use the whole bandwidth for a symbol to exploit frequency diversity, while OFDM uses a single subcarrier for it. MC-CDMA and OFDM are resilient to inter-symbol-interference and spectrally efficient. MC-CDMA possesses the merits of both DS-CDMA and OFDM.


Burst-by-burst adaptive CDMA research
Dr. Ee Lin Kuan
In 1992, during the World Administrative Radio Conference, the operating bands for the third-generation (3G) mobile radio
communications systems were identified as the bands of 1885--2025 MHz and 2110--2200 MHz. The launch of the 3G systems is scheduled for the year 2000, leading to the collective name of International Mobile Telecommunications 2000 (IMT-2000). The main design aim for the 3G systems was to provide global support for a variety of multimedia services at a quality similar to that over fixed networks, while ensuring high spectral efficiencies with the following targets:
  • 384 kbits/s transmission rate for full area coverage
  • 2 Mbits/s transmission rate for local area coverage

In order to support a variety of multimedia applications, variable bit rate and packet transmission capabilities are essential. There
is also a need for flexibility such that different services could be multiplexed within the same environment.

The standardization process for 3G systems commenced with a number of proposals from all over the world, but the standard has
now been narrowed down to four major proposals. The common feature in these proposals is the employment of Wideband CDMA (W-CDMA) as the air interface. In W-CDMA, the increased bandwidth of at least 5 MHz facilitates a variety of performance advantages over previous cellular systems. With the increase in bandwidth, the target bit rate of 384 kbits/s is achievable while ensuring a reasonable user capacity. The paucity of available spectrum calls for a high area spectral efficiency, which can be provided by CDMA. Furthermore, the wide bandwidth enables higher diversity gains to be obtained through the higher-order multipath diversity
schemes facilitated by the widely spread W-CDMA signal, provided that all the resolvable multipath components can be combined, which is possible owing to the higher tolerable receiver complexity. In addition, further performance improvements can be obtained
through a variety of techniques, such as the application of antenna arrays, multiuser detection or space-time coding. The
employment of CDMA enables multi-rate services to be provided using various methods, such as multi-code transmission,
multiple spreading factor transmission and modulation mode multiplexing. Some of the key properties emphasized in W-CDMA are:

  • Improved performance over second-generation systems, including increased user capacity and improved coverage quality
  • High service flexibility including the support of a wide range of services having bit rates up to 2 Mbits/s
  • A fast and efficient packet access scheme
  • Support of evolutionary technologies, such as adaptive antenna arrays and multi-user detection

Direct Sequence Code Division Multiple Access (DS-CDMA) is interference-limited due to the multiple access interference (MAI)
generated by the users transmitting within the same bandwidth simultaneously. The signals from the users are separated by means of
spreading sequences that are unique to each user. These spreading sequences are usually non-orthogonal. Even if they are orthogonal, the asynchronous transmission or the time-varying nature of the mobile radio channel may partially destroy this orthogonality. The non-orthogonal nature of the codes results in MAI, which degrades the performance of the system. The frequency selective mobile radio channel also gives rise to inter-symbol interference (ISI) due to multi-path propagation. This is exacerbated by the fact that the mobile radio channel is time-varying. A class of CDMA receivers known as multiuser receivers exploit the available information about the spreading sequences and mobile channel impulse responses of all the CDMA users in order to improve the performance of the CDMA users. These multiuser receivers include joint detection (JD) receivers, interference cancellation (IC) receivers, tree-search type algorithms and iterative receiver schemes. Figure 3 shows a general classification of the various types of multiuser receivers.
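The loss of orthogonality under asynchronous transmission, described above as a source of MAI, can be demonstrated directly. The codes and chip offset below are merely illustrative:

```python
import numpy as np

# Length-8 Walsh-Hadamard spreading codes: orthogonal when chip-synchronous
H = np.array([[1]])
for _ in range(3):
    H = np.block([[H, H], [H, -H]])
c1, c2 = H[1], H[2]
print(c1 @ c2)                 # 0: zero cross-correlation when aligned

# Asynchronous transmission: user 2 arrives 3 chips late, so the correlation
# window straddles its previous symbol (here carrying the opposite data bit)
offset = 3
async_c2 = np.concatenate([-c2[-offset:], c2[:-offset]])
print(c1 @ async_c2)           # non-zero residual correlation -> MAI
```

This residual cross-correlation is exactly the term that multiuser receivers, by exploiting knowledge of all users' spreading sequences, are designed to remove.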

Figure 3 : Classification of CDMA detectors

Mobile radio signals are subject to propagation path loss as well as slow fading and fast fading. Due to the nature of the fading
channel, transmission errors occur in bursts when the channel exhibits deep fades or when there is a sudden surge of multiple
access interference (MAI) or inter-symbol interference (ISI). Adaptive-rate CDMA techniques can be used to overcome this
phenomenon, where the information rate is varied in accordance with the channel quality. The information rate is chosen
accordingly, in order to provide the best trade-off between the BER and throughput performance for a given application. The signal
to interference plus noise ratio (SINR) at the output of the JD receiver is derived and this parameter is used as the criterion
for adapting the information rate. Various methods of varying the information rate are considered, including Adaptive Quadrature Amplitude Modulation (AQAM) and the Variable Spreading Factor (VSF) scheme. AQAM is an adaptive-rate technique, whereby the data modulation mode is chosen according to some criterion related to the channel quality. On the other hand, in VSF transmission, the information rate is varied by adapting the spreading factor of the CDMA codes used, while keeping the chip rate constant. Figure 4 shows the stylized amplitude variation in a fading channel and the switching of the
modulation modes in a four-mode AQAM system, where the performance degrades but the throughput increases when switching from Mode 1 to 4.

Figure 4 : Basic concept of a four-mode AQAM transmission in a narrowband channel. The variation of the modulation mode follows the fading variation of the channel over time.
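The four-mode switching logic of Figure 4 can be sketched as follows; the SINR thresholds and mode set are hypothetical illustrations, whereas practical systems optimise the thresholds for a target BER:

```python
import numpy as np

# Hypothetical SINR switching thresholds (dB) for a four-mode AQAM scheme
modes = ["BPSK", "4QAM", "16QAM", "64QAM"]       # 1, 2, 4, 6 bits/symbol
bits_per_symbol = [1, 2, 4, 6]
thresholds = [-np.inf, 8.0, 14.0, 20.0]          # switch up when SINR exceeds

def select_mode(sinr_db):
    """Pick the highest-throughput mode the channel quality supports."""
    idx = max(i for i, t in enumerate(thresholds) if sinr_db >= t)
    return modes[idx], bits_per_symbol[idx]

for sinr in [3.0, 10.0, 16.0, 25.0]:             # burst-by-burst SINR estimates
    print(sinr, *select_mode(sinr))
```

In the burst-by-burst adaptive CDMA scheme described above, the SINR estimate would come from the output of the JD receiver, and the same threshold logic applies whether the adapted quantity is the QAM mode (AQAM) or the spreading factor (VSF).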


Ubiquitous antenna and OFDM research
Dr. Minoru Okada
Since 1999 Minoru - who is a lecturer at the University of Osaka in Japan - has been a visiting research fellow in the group, spending a
sabbatical year with us on leave from the University of Osaka. Minoru is a member of the IEEE in the USA and the IEICE in Japan. He received the young engineer award from the IEICE in 1999. His research interests include the physical layer of mobile multimedia
communications systems. In order to establish a multimedia communication system, high speed, high-reliability digital
transmission is required. However, achieving this is not an easy task in the mobile communication environment, since the channel is not only bandwidth limited, but also impaired by the multipath fading and interference from the users transmitting at the same frequency. In
order to solve these problems, he is researching a range of specific topics, some of which are highlighted below.

Ubiquitous antenna based systems: According to this concept a high number of antennas are distributed over the service area. Each antenna and the base station are connected via optical fibre, conveying the radio signals directly over the optical-fibre cable. The
ubiquitous antennae not only reduce the required number of radio basestations, which reduces the cost of the overall system, but also allow us to employ sophisticated signal processing techniques such as diversity reception, interference cancellation, and multi-user
detection. Currently Minoru is studying simple interference cancellation algorithms for the ubiquitous antenna.

Source matched adaptive transmission: The required bit rate, the acceptable error rate and the delay of a wireless communications link vary depending on the type of information transmitted. Upon varying the transmission power, the forward error correction code and the modulation format, we can improve the bandwidth efficiency of wireless systems. The optimum algorithm is sought that can be used to control the transmission schemes adaptively according to the type of information.

Beamforming assisted burst-by-burst wideband Adaptive Quadrature Amplitude Modulation
Hafizal Mohamad
Rapid growth in the number of wireless communications users has been observed over the last few years. Statistics show a very promising future for wireless communications systems, as demand increases for people to be connected at any place and any time.

Burst-by-burst Adaptive Quadrature Amplitude Modulation (AQAM) is studied in conjunction with adaptive beamforming, diversity and interference cancellation in multiuser environments. The system's performance is further improved upon invoking various error correction techniques, in particular space-time coding.

Wireless multi-user OFDM systems
Matthias Munster
In a typical multi-cellular environment a signal transmitted from a handset to the basestation is subjected to a variety of impairments.
The most obvious impairment is the contamination of the transmitted signal by thermal noise at the receiver. Secondly, the signal quality suffers from multipath propagation, which implies that several delayed replicas of the same signal arrive at the receiver antenna. This inflicts inter-symbol interference, which heavily degrades the signal quality and hence must be compensated by equalization. In addition, the channel might impose time-variant fading due to the mobility of the users or of the deflectors and scatterers. In a multi-cellular environment we additionally encounter interference from other users and basestations.

Figure 5 : Block diagram of an OFDM based transmission system.

The problem of inter-symbol interference can be elegantly addressed by employing OFDM (Orthogonal Frequency Division Multiplexing), which is a multi-carrier transmission scheme conceived back in the 1960s. With the availability of high-performance signal-processing devices it has been re-discovered, since the operation of multi-carrier modulation can be efficiently implemented by means of a Fast Fourier Transform (FFT). As opposed to single-carrier transmission, OFDM seeks to avoid inter-symbol interference by appending a guard interval to each FFT output block - before signal interpolation and up-conversion to the HF (High Frequency) stage - at the cost of a slight reduction in bandwidth efficiency. The problem of channel equalization at the receiver is simplified substantially. In multiuser environments of higher mobility, the OFDM scheme suffers from inter-subcarrier interference, which is due to a loss of orthogonality between the different sub-channels. OFDM is well established in a variety of services, such as Digital Audio Broadcasting, Asymmetric Digital Subscriber Lines and the new HiperLAN/2 standard.
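The guard-interval mechanism can be sketched as follows. The FFT size, prefix length and channel taps below are illustrative assumptions, but the one-tap per-subcarrier division at the receiver is exactly the simplification of channel equalization referred to above:

```python
import numpy as np

rng = np.random.default_rng(1)
N, cp = 64, 16                       # subcarriers, cyclic-prefix (guard) length

# One OFDM symbol: map 4QAM data onto the subcarriers, then IFFT
data = rng.integers(0, 2, (2, N)) * 2 - 1
X = (data[0] + 1j * data[1]) / np.sqrt(2)
x = np.fft.ifft(X) * np.sqrt(N)
tx = np.concatenate([x[-cp:], x])    # prepend guard interval (cyclic prefix)

# Dispersive channel, shorter than the guard interval
h = np.array([0.8, 0.4 + 0.2j, 0.2])
rx = np.convolve(tx, h)[: cp + N]

# Receiver: discard the prefix, FFT, then one-tap equalisation per subcarrier
Y = np.fft.fft(rx[cp:]) / np.sqrt(N)
H = np.fft.fft(h, N)
X_hat = Y / H                        # equalisation reduces to a division
err = np.max(np.abs(X_hat - X))
print(err)                           # ~0: the ISI has been removed
```

Because the prefix makes the linear channel convolution appear circular over the FFT window, each subcarrier sees only a single complex gain - which no longer holds once the channel delay spread exceeds the guard interval, or once mobility destroys the subcarrier orthogonality as noted above.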

Matthias' research is mainly concerned with the development of signal processing algorithms for tailoring OFDM to the needs of a
multi-cellular environment, of which a very important aspect is the signal separation of simultaneously transmitting users. This
could be achieved with multiple-antenna assisted reception at the basestation, which requires further investigation in
conjunction with OFDM. Related aspects are channel parameter estimation as well as synchronization mechanisms.

Application of ``support vector machines'' in CDMA multiuser detection
Ahmad Kamsani Samingan
The so-called support vector machines (SVM) originated from statistical learning theory. This method minimises the so-called
structural risk, rather than the empirical risk used in many learning procedures. The simplicity of SVMs and their sound
theoretical background render them attractive in various fields, including mobile communications. A particular application can be
found in channel equalisers.

The SVM approach to the design of decision feedback equalisers (DFE) was found to outperform the minimum mean square error (MMSE) approach, which is broadly used in practice.
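As a hedged illustration of the idea - not the DFE structure actually investigated - a linear SVM trained with the Pegasos sub-gradient method can act as a symbol detector for an invented two-tap dispersive channel:

```python
import numpy as np

rng = np.random.default_rng(2)

# BPSK symbols through a dispersive channel plus noise; the equaliser must
# classify the transmitted bit from a window of received samples
h = np.array([1.0, 0.5])
bits = rng.integers(0, 2, 2000) * 2 - 1
rx = np.convolve(bits, h)[: len(bits)] + 0.2 * rng.standard_normal(len(bits))
X = np.column_stack([rx[1:], rx[:-1]])        # feature: two received samples
y = bits[1:].astype(float)

# Linear SVM trained by the Pegasos sub-gradient method (hinge loss + L2);
# a kernel SVM would handle harder, nonlinearly separable channels
w, lam = np.zeros(2), 1e-3
for t in range(1, 20001):
    i = rng.integers(len(y))
    eta = 1.0 / (lam * t)
    if y[i] * (w @ X[i]) < 1:                 # margin violated: hinge gradient
        w = (1 - eta * lam) * w + eta * y[i] * X[i]
    else:
        w = (1 - eta * lam) * w

acc = np.mean(np.sign(X @ w) == y)
print(f"detection accuracy: {acc:.3f}")
```

The maximum-margin objective is what gives the SVM its structural-risk-minimisation property; the MMSE detector it is compared against would instead minimise the mean squared error between the detector output and the transmitted symbols.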

Although the advantages of SVMs are apparent, their application in CDMA multiuser detection is still in its infancy. Apart from their conceptual simplicity, SVMs promise a reduction in complexity and an improvement in performance compared to the existing
state-of-the-art solutions.

Wideband adaptive full response multilevel transceivers and equalizers
Dr. Choong Hin Wong
The scope of this research covers a wide range of interesting physical layer topics, which include channel coding, modulation and equalization techniques. However, the main challenge of this research is to improve the information throughput of a mobile cellular network within a given bandwidth. This is achieved by adapting the transmission information rate according to the prevalent mobile channel conditions. Furthermore, equalization techniques are studied and invoked, in order to ensure reliable information transmission.

In most cellular systems, the transmitted information symbols experience both amplitude and phase distortion as a result of the
dispersive and time varying mobile propagation channel. In a system which transmits a sequence of pulse-shaped information
symbols, the time domain full response signalling pulses are smeared by the hostile dispersive mobile channel, resulting in
intersymbol interference. At the receiver, however, the distorted signal can be restored by utilizing an equalizer, in order to recover the transmitted information symbols, and consequently the transmission quality can be enhanced. This process is exemplified by Figure 6, which displays the transmitted, channel-distorted and equalized information
symbols.
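The equalisation process of Figure 6 can be sketched numerically. The channel taps, equaliser length and decision delay below are invented for the example, and a simple least-squares (zero-forcing style) tap design stands in for the adaptive equalisers actually studied:

```python
import numpy as np

rng = np.random.default_rng(3)

h = np.array([0.9, 0.5, 0.3])                  # dispersive channel (causes ISI)
s = rng.choice([-1.0, 1.0], 500)               # transmitted BPSK symbols
r = np.convolve(s, h)[: len(s)] + 0.1 * rng.standard_normal(len(s))

# Zero-forcing design: choose taps w so that conv(h, w) ~ impulse at `delay`
L, delay = 11, 5
A = np.zeros((L + len(h) - 1, L))
for j in range(L):
    A[j : j + len(h), j] = h                   # columns: shifted channel copies
d = np.zeros(L + len(h) - 1)
d[delay] = 1.0
w = np.linalg.lstsq(A, d, rcond=None)[0]

eq = np.convolve(r, w)[delay : delay + len(s)]  # equalised, delay-compensated
rate = np.mean(np.sign(eq) == s)
print(rate)                                    # fraction of symbols recovered
```

The equalised samples `eq` cluster back around the transmitted constellation points, which is the effect the two panels of Figure 6 illustrate.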

The transmitted information symbols also experience rapid signal fluctuation as a result of the mobile propagation channel, which is termed fast fading. Consequently a time-varying channel quality is produced, which can be exploited in order to increase the transmission throughput. This can be achieved by using Adaptive Quadrature Amplitude Modulation (AQAM), whereby modulation modes with a higher information throughput can be used when the channel quality is favourable. Conversely, a more robust modulation mode with a lower information throughput is utilized when a degraded channel quality is encountered. Consequently, on average, by adapting the modulation modes we can increase the information throughput, while maintaining a certain target transmission quality.

a) Channel distorted signal

b) Equalized signal

Figure 6 : The effect of channel distortion on the transmitted symbols and the corresponding equalized symbols.


Turbo equalisation algorithms for full and partial response modulation
Dr. Bee Leong Yeap
Motivation: Some of the key objectives of the third generation wireless system proposals are to utilise highly spectrally efficient schemes --- compared to the existing second generation systems --- and to provide full and limited coverage for bit rates of 144 Kbps (preferably 384 Kbps) and 2 Mbps, respectively. However, there are problems associated with high data rate transmissions and with employing spectrally efficient systems. Systems transmitting at high bit rates, such as 2 Mbps, experience a high degree of channel-induced dispersion and hence suffer from Inter Symbol Interference (ISI). Hence, the equaliser technology employed must be capable of mitigating the effects of ISI. In our research, a joint channel equalisation and channel decoding technique, known as turbo equalisation, has been investigated in the context of partial response Continuous Phase Modulation (CPM) schemes, namely Gaussian Minimum Shift Keying, and spectrally efficient multi-level Quadrature Amplitude Modulation techniques, since turbo equalisation has been shown to effectively mitigate the effects of channel-induced ISI and the controlled ISI imposed by partial response modulators.


a) One iteration


b) Two iterations


c) Four iterations


d) Six iterations

Figure 7 : Reliability of the decoder's so-called Logarithmic Likelihood Ratio after one, two, four and six turbo equalisation iterations. The reliability is expressed as the product of the LLR values produced by the decoder and its corresponding transmitted source bit.

Brief Overview of the Turbo Equalisation Principles: The basic principle of turbo equalisation is that information, which reflects the reliability of the estimated channel-encoded bits, is exchanged between the channel equaliser and channel decoder iteratively. When a certain iteration termination criterion is satisfied, the information related to the source bit --- instead of the encoded bit --- is determined. This information is typically in the form of the Log Likelihood Ratio (LLR), which is the natural logarithm of the ratio of the probabilities that the transmitted information assumes its two possible values, i.e. +1 and -1. With each successive iteration, the channel equaliser and channel decoder provide more reliable LLR values, which are associated with the transmitted source bits and the encoded bits. In Figure 7 the reliability of the LLR values --- expressed as the product of the decoder LLR values with the corresponding source bits --- is plotted against the source bit's position. A large positive reliability value indicates a high-confidence LLR value, whereas a negative value represents an erroneous decision. It is observed that in the first iteration of Figure 7.a there was a large number of low-reliability LLR values. The negative reliability values in Figure 7.a indicate that decision errors have been made. In the second turbo equalisation iteration of Figure 7.b the number of low-confidence LLR values was reduced compared to the first iteration. With increasing turbo equalisation iterations --- for example after six turbo equalisation iterations, as shown in Figure 7.d --- the number of low-confidence LLR values was significantly reduced, hence justifying the improved BER performance obtained with an increasing number of turbo equalisation iterations.
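For a single BPSK bit in AWGN the LLR has a simple closed form, which makes the reliability measure of Figure 7 concrete; the noise level below is an arbitrary illustration:

```python
import numpy as np

# LLR of a BPSK bit b in {+1, -1} received as y = b + n, n ~ N(0, sigma^2):
# L(b) = ln[ P(b=+1|y) / P(b=-1|y) ] = 2*y / sigma^2.  Its sign is the hard
# decision; its magnitude is the confidence that iterative receivers refine.
rng = np.random.default_rng(4)
sigma = 0.7
bits = rng.choice([-1.0, 1.0], 8)
y = bits + sigma * rng.standard_normal(8)
llr = 2 * y / sigma**2

# "Reliability" as plotted in Figure 7: LLR times the true source bit;
# a negative entry corresponds to a decision error
print(np.round(llr * bits, 2))
```

In the turbo equaliser the equaliser and decoder exchange such LLRs (minus the a-priori part already known to the other stage), each iteration sharpening their magnitudes in the manner visible across the panels of Figure 7.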

Current Research: Iterative multi-user interference detection employing the turbo equalisation strategy is an attractive research topic. Recent work utilising turbo equalisation has been successful in mitigating the interference inflicted by other Code Division Multiple Access (CDMA) users, such that near-single-user performance is achieved.

Turbo equalisation for Trellis-Coded Modulation is a further topic investigated in the recent literature in order to improve the
spectral efficiency of the system --- which is one of the key objectives of mobile radio research.

Radial basis function network based burst-by-burst adaptive transceivers
Dr. Mong Suan Yee
Artificial neural networks (ANN) draw their inspiration from the structure and operational mechanism of the human brain. ANNs do
not attempt to model faithfully the neuro-biology of the human brain, but rather employ the abstract notions of how the brain
functions. A salient feature of the brain is its ability to learn and to adapt appropriately to changing circumstances. We can make
use of different ANN structures and employ learning algorithms in the field of wireless communications, where 'learning' assists in
improving the system's performance. The analogy of biological and artificial neurons is characterised with the aid of
Figure 8.

Mong Suan's research is based on using an ANN structure referred to as the Radial Basis Function (RBF) network for channel equalisation. The RBF based equaliser design is investigated in the context of various Quadrature Amplitude Modulation (QAM) schemes in conjunction with turbo channel coding and iterative decoding / equalisation techniques. Since the schemes investigated may become complex, computational complexity reduction methods are also investigated. The received and transmitted 4QAM signal constellation along with the error-free decided constellation is seen in Figure 9.
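A minimal sketch of an RBF equaliser, assuming an invented two-tap channel known at the receiver: the RBF centres are the noise-free channel output states and the sign of the network output gives the symbol decision. This is the Bayesian symbol-by-symbol detector, a simplified stand-in for the turbo-coded schemes actually investigated:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(5)

h = np.array([1.0, 0.5])     # assumed channel impulse response
sigma = 0.2                  # noise standard deviation (sets the RBF width)

# Noise-free channel states: every combination of 3 consecutive BPSK bits
# determines the observation pair (r[n], r[n-1]); the RBF centres are these
# 8 states, weighted +1/-1 according to the bit being detected, b[n]
centres, weights = [], []
for b in product([-1.0, 1.0], repeat=3):       # (b[n-2], b[n-1], b[n])
    centres.append([h[0] * b[2] + h[1] * b[1], h[0] * b[1] + h[1] * b[0]])
    weights.append(b[2])
centres, weights = np.array(centres), np.array(weights)

def rbf_equalise(x):
    """Decision via an RBF network with Gaussian basis functions."""
    phi = np.exp(-np.sum((x - centres) ** 2, axis=1) / (2 * sigma**2))
    return np.sign(phi @ weights)

# Run a short transmission through the channel plus noise, detect each bit
bits = rng.choice([-1.0, 1.0], 1000)
r = np.convolve(bits, h)[: len(bits)] + sigma * rng.standard_normal(len(bits))
dec = np.array([rbf_equalise(np.array([r[n], r[n - 1]]))
                for n in range(1, len(bits))])
acc = np.mean(dec == bits[1:])
print(acc)                   # detection accuracy
```

The 'learning' aspect mentioned above enters when the centres are not known a priori: they are then estimated adaptively from training data, which is where the complexity-reduction methods become important.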

a) Anatomy of a typical neuron

b) An artificial neuron

Figure 8 : Comparison between biological and artificial neurons

a) Channel distorted signal

b) Equalized signal

Figure 9 : The effect of channel distortion on the transmitted 4QAM symbols and the corresponding equalised symbols.

