  • Highly-parallel algorithms and architectures for high-throughput wireless receivers

    During the past two decades, reliable wireless communication at near-theoretical-limit transmission throughputs has been facilitated by receivers that operate on the basis of the Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm. Most famously, this algorithm is employed for turbo error correction in the Long Term Evolution (LTE) standard for cellular telephony, as well as in its predecessors. Looking forward, turbo error correction promises transmission throughputs in excess of 1 Gbit/s, which is the goal specified in the IMT-Advanced requirements for next-generation cellular telephony standards. Throughputs of this order have only very recently been achieved by state-of-the-art (SOA) LTE turbo decoder implementations. However, this was achieved by exploiting every possible opportunity to increase the parallelism of the BCJR algorithm at an architectural level, implying that the SOA approach has reached its fundamental limit. This limit may be attributed to the data dependencies of the BCJR algorithm, which give it an inherently serial nature that cannot be readily mapped to processing architectures having a high degree of parallelism.
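    To see where the serial nature comes from, consider the BCJR forward (alpha) recursion. The minimal sketch below uses a toy two-state trellis with illustrative branch metrics (none of these values come from the project); the point is that the state probabilities at step k+1 cannot be computed before those at step k, which is the data dependency that frustrates parallel mapping.

```python
# Minimal sketch of the BCJR forward (alpha) recursion over a toy 2-state
# trellis. The trellis and branch metrics are illustrative only; the key
# observation is the serial loop: alpha at step k+1 depends on alpha at
# step k, so the steps cannot be computed concurrently.

def forward_recursion(gammas, num_states=2):
    """gammas[k][s][s2] is the branch metric from state s to state s2 at step k."""
    alpha = [1.0 / num_states] * num_states  # uniform initial state probabilities
    history = [alpha]
    for gamma in gammas:                     # serial loop over trellis steps
        new_alpha = [
            sum(alpha[s] * gamma[s][s2] for s in range(num_states))
            for s2 in range(num_states)
        ]
        norm = sum(new_alpha)                # normalise to avoid underflow
        alpha = [a / norm for a in new_alpha]
        history.append(alpha)
    return history

# Hypothetical branch metrics for three trellis steps.
gammas = [
    [[0.9, 0.1], [0.2, 0.8]],
    [[0.7, 0.3], [0.4, 0.6]],
    [[0.5, 0.5], [0.6, 0.4]],
]
alphas = forward_recursion(gammas)
```

    A matching backward (beta) recursion runs over the same trellis in the opposite direction, so a conventional BCJR decoder carries this dependency chain in both directions.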

    Against this background, we propose to redesign turbo decoder implementations at an algorithmic level, rather than at the architectural level of the SOA approach. More specifically, we have recently been successful in devising an alternative to the BCJR algorithm, which has the same error correction capability, but does not have any data dependencies. Owing to this, our algorithm can be mapped to highly-parallel many-core processing architectures, facilitating an LTE turbo decoder processing throughput that is more than an order of magnitude higher than the SOA, satisfying future demands for gigabit throughputs. We will achieve this for the first time by developing a custom Field Programmable Gate Array (FPGA) architecture, comprising hundreds of processing cores that are interconnected using a reconfigurable Benes network. Furthermore, we will develop custom Network-on-Chip (NoC) architectures that facilitate different trade-offs between chip area, energy efficiency, reconfigurability, processing throughput and latency. In parallel with developing these high-performance custom implementation architectures, we will apply our novel algorithm to both existing Graphics Processing Unit (GPU) and NoC architectures. This will allow us to make rapid progress, applying our novel algorithm not only to error correction, but to all aspects of receiver operation, including demodulation, equalisation, source decoding, channel estimation and synchronisation. Drawing upon our high-throughput algorithms and highly-parallel processing architectures, we will develop techniques for holistically optimising the algorithmic and implementational parameters of both the transmitter and receiver. This will facilitate practical high-performance schemes, which can pave the way for future generations of wireless communication.
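    A Benes network is attractive for such a core-to-core interconnect because it can route any permutation of its inputs with a modest switch count. The back-of-envelope sizing below uses standard properties of the topology (2·log2(N) − 1 stages of N/2 two-by-two crossbar switches for N a power of two); it is not a measurement of the proposed FPGA design, and the core count is illustrative.

```python
import math

# Back-of-envelope sizing of a Benes interconnect for N endpoints, N a
# power of two. A Benes network has 2*log2(N) - 1 stages, each containing
# N/2 two-by-two crossbar switches, and is rearrangeably non-blocking:
# it can realise any permutation of its inputs, which is what makes it
# suitable as a reconfigurable core-to-core interconnect.

def benes_switch_count(n):
    assert n >= 2 and n & (n - 1) == 0, "n must be a power of two"
    stages = 2 * int(math.log2(n)) - 1
    return stages, (n // 2) * stages

stages, switches = benes_switch_count(256)  # e.g. hundreds of cores
```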

    This research addresses key EPSRC priorities in the Information and Communication Technologies theme, including 'Many-core architectures and concurrency in distributed and embedded systems' and 'Towards an intelligent information infrastructure'. The 'Working together' priority is also addressed, since this cross-disciplinary research will develop new knowledge that spans the gap between high-performance communication theory and high-performance hardware design. This research will offer new insights into the design of many-core architectures, which the hardware design community will be able to apply in the design of general-purpose architectures. Furthermore, the communication theory community will be able to apply our algorithms across even wider aspects of receiver operation.

  • Channel Decoder Architectures for Energy-Constrained Wireless Communication Systems: Holistic Approach

    The Machine-To-Machine (M2M) applications of Wireless Sensor Networks (WSNs) and Wireless Body Area Networks (WBANs) are set to offer many new capabilities in the EPSRC themes of 'Healthcare technologies', 'Living with environmental change' and 'Global uncertainties', granting significant societal and economic benefits. These networks comprise a number of geographically-separated sensor nodes, which collect information from their environment and exchange it using wireless transmissions. However, these networks cannot as yet be employed in demanding applications, because current sensor nodes cannot remain powered for a sufficient length of time without employing batteries that are prohibitively large, heavy or expensive.

    In this work, we aim to achieve an order-of-magnitude extension to the battery charge-time of WSNs and WBANs by facilitating a significant reduction in the main cause of their energy consumption, namely the energy used to transmit information between the sensor nodes. A reduction in the sensor nodes' transmission energy is normally prevented, because it results in corrupted transmitted information owing to noise or interference. However, we will maintain reliable communication when using a low transmit energy by specifically designing channel code implementations that can be employed in the sensor nodes to correct these transmission errors. Although existing channel code implementations can achieve this objective, they themselves may have a high energy consumption, which can erode the transmission energy reduction they afford. Therefore, in this work we will aim to achieve a beneficial step change in the energy consumption of channel code implementations, so that their advantages are maintained when employed in energy-constrained wireless communication systems, such as the M2M applications of WSNs and WBANs.

    We shall achieve this by facilitating a significant reduction in the supply voltage that is used to power the channel code implementations. A reduction in the supply voltage is normally prevented, because it reduces the speed of the implementation and causes the processed information to become corrupted when its operations can no longer be performed within the allotted time. However, we will maintain reliable operation when using a low supply voltage by specifically designing the proposed channel code implementations to use their inherent error correction ability to correct not only transmission errors, but also these timing errors. To the best of our knowledge, this novel approach has never been attempted before, despite its significant benefits. Furthermore, we will develop methodologies to allow the designers of WSNs and WBANs to estimate the energy consumption of the proposed channel code implementations, without having to fabricate them. This will allow other researchers to promptly optimise the design of the proposed channel code implementations to suit their energy-constrained wireless communication systems, such as WSNs and WBANs. Using this approach, we will demonstrate how the channel coding algorithm and implementation can be holistically designed, in order to find the most desirable trade-off between complexity and performance.
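    The leverage offered by supply-voltage scaling follows from the first-order CMOS energy model, in which the dynamic switching energy per operation scales with the square of the supply voltage (E ≈ C·Vdd²). The voltages in the sketch below are hypothetical, chosen only to illustrate the magnitude of the saving that motivates tolerating the resulting timing errors.

```python
# First-order illustration of why supply-voltage scaling saves energy:
# dynamic CMOS switching energy scales roughly as E ~ C * Vdd^2. The
# voltages here are hypothetical. Lowering Vdd also slows the logic,
# which is the source of the timing errors discussed in the text.

def dynamic_energy_ratio(vdd_low, vdd_nominal):
    """Fraction of nominal switching energy remaining at the reduced supply voltage."""
    return (vdd_low / vdd_nominal) ** 2

ratio = dynamic_energy_ratio(0.6, 1.0)  # roughly a 64% energy saving
```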

  • Cooperative Classical and Quantum Communications Systems

    According to Moore's law, the number of transistors on a microchip doubles every two years. Hence, the transistor size is expected to approach the atomic scale in the near future, owing to our quest for miniaturisation and more processing power. However, atomic-level behaviour is governed by the laws of quantum physics, which are significantly different from those of classical physics. More explicitly, the inherent parallelism associated with quantum entities allows a quantum computer to carry out operations in parallel, unlike conventional computers. More significantly, quantum computers are capable of solving challenging optimisation problems in a fraction of the time required by a conventional computer. However, the major impediment to the practical realisation of quantum computers is the sensitivity of the quantum states, which collapse when they interact with their environment. Hence, powerful Quantum Error Correction (QEC) codes are needed for protecting the fragile quantum states from undesired influences and for facilitating the robust implementation of quantum computers. The inherent parallel processing capability of quantum computers will also be exploited to dramatically reduce the detection complexity in future generation communications systems.
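    The core idea behind QEC, protecting fragile information through redundancy, is easiest to see in its simplest instance, the three-qubit bit-flip (repetition) code. The sketch below is purely classical: it shows only the majority-vote logic, and ignores everything genuinely quantum (superposition, phase errors, and the fact that the no-cloning theorem forces real QEC to use entangling encoders and syndrome measurement rather than literal copying).

```python
from collections import Counter

# Classical illustration of the simplest QEC idea, the 3-qubit bit-flip
# (repetition) code: one logical bit is stored as three copies, and any
# single flipped copy is corrected by majority vote. Real QEC cannot copy
# quantum states (no-cloning), so it entangles qubits and measures error
# syndromes instead, but the redundancy principle is the same.

def encode(bit):
    return [bit, bit, bit]

def correct(codeword):
    """Majority vote recovers the logical bit despite one flipped copy."""
    return Counter(codeword).most_common(1)[0][0]

word = encode(1)
word[0] ^= 1               # a single bit-flip error
recovered = correct(word)  # the logical bit survives
```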

    In this work, we aim to jointly design and refine classical and quantum algorithms that support each other in creating powerful communications systems. More explicitly, the inherent parallelism of quantum computing will be exploited for mitigating the high complexity of classical detectors. Then, near-capacity QEC codes will be designed by appropriately adapting algorithms and design techniques used in classical Forward Error Correction (FEC) codes. Finally, cooperative communications involving both the classical and quantum domains will be conceived. The implementation of a quantum computer based purely on quantum-domain hardware and software is still an open challenge. However, a classical computer employing some quantum chips for achieving efficient parallel detection and processing may be expected to be implemented soon. This project is expected to produce a 'quantum leap' towards the next-generation Internet, involving both classical and quantum information processing, for providing reliable and secure communications networks as well as affordable detection complexity.
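    The scale of the detection-complexity reduction alluded to above can be illustrated with the standard complexity of quantum search: finding the best entry in an unstructured space of N candidates (for instance, candidate symbol vectors in a detector) takes on the order of N classical evaluations, whereas Grover's algorithm needs about (π/4)·√N quantum iterations. The search-space size below is a hypothetical example, not a figure from the project.

```python
import math

# Standard query-complexity comparison for unstructured search over N
# candidates: order-N classical evaluations versus roughly (pi/4)*sqrt(N)
# Grover iterations. The 20-bit search space is illustrative only.

def grover_iterations(n):
    return math.ceil((math.pi / 4) * math.sqrt(n))

n = 2 ** 20                  # hypothetical 20-bit candidate space
classical = n                # worst-case classical evaluations
quantum = grover_iterations(n)
```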

  • Cooperative Back-haul Aided Next-Generation Digital Subscriber Loops

    To meet the demand of exponentially growing tele-traffic and to sustain the current level of economic growth, a high-quality digital infrastructure based on innovative and cost-efficient solutions is required. The current geo-economics and the building preservation of historic cities do not favour the pervasive penetration of fibre. Hence, a lower-cost solution based on the improved exploitation of the existing copper network is essential to facilitate the transformation of the digital infrastructure to support the next evolutionary step to gigabit/s data rates. However, experts from our industrial partner BT believe that the throughput achieved with the aid of state-of-the-art copper technology may represent less than 30% of its ultimate capacity, once we exploit the hitherto unexploited high-frequency band. Hence, research into next-generation ultra-high-throughput Digital Subscriber Loop (DSL) systems becomes crucially important and timely, where radically new signal processing techniques have to be conceived.
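    The leverage of the unexploited band can be seen from the Shannon capacity formula, C = B·log2(1 + SNR): widening the usable band dominates even a substantial loss in signal-to-noise ratio. The figures in the sketch below are purely illustrative (real DSL links have strongly frequency-dependent attenuation and crosstalk, so capacity must be integrated across sub-carriers), but they show why a 500 MHz band puts gigabit/s rates within reach of copper.

```python
import math

# Back-of-envelope Shannon capacity C = B * log2(1 + SNR) for a copper
# channel. The SNR values are hypothetical flat-band stand-ins; a real
# DSL capacity estimate integrates a frequency-dependent SNR over the
# band. The point is that bandwidth, not SNR, dominates the headline rate.

def shannon_capacity_bps(bandwidth_hz, snr_db):
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

narrow = shannon_capacity_bps(17e6, 30)   # a VDSL2-like 17 MHz band
wide = shannon_capacity_bps(500e6, 10)    # the full band up to 500 MHz
```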

    The challenge is to conquer the entire Very High Frequency (VHF) band and to holistically design the amalgamated wire-line and wireless system considered. Our proposed research starts from a fundamental understanding of the DSL channel over the entire VHF band and beyond into the UHF band (up to 500 MHz), and progresses to the design of radical signal processing techniques for tackling the critical challenges. Holistic system optimisation is proposed for exploiting the full potential of copper. Thanks to BT's strong support, our proposed research offers both immediate engineering impact and a long-term scientific adventure.