Signal Processing (SP)

The Signal Processing Society (SP-1) caters to hardware, firmware, and software engineers involved in signal processing techniques, implementation, and apparatus. The IEEE Signal Processing Repository (SigPort) archives manuscripts, reports, theses, and supporting materials. Created and supported by the IEEE Signal Processing Society (SPS), SigPort collects technical material of interest to the broad signal processing community. Click on SigPort to learn more about the repository.

Technical Meetings, Lectures and Events



This is the collection of the slides, viewgraphs, and materials presented at the technical meetings and lectures of the Signal Processing chapter.


How to Create IEEE Meetings and Display on IEEE LI Chapter Web Browsers?

Dr. Donaldson (2021-11-15)

The Long Island (LI) Chapter of the IEEE Signal Processing Society (SPS), in collaboration with the executive committee, Employment Assistance Committee, Young Professionals, and Lifetime Members, presents the following technical lecture: How to Create IEEE Meetings and Display on IEEE LI Chapter Web Browsers?

Click here to register

Presentation Slides: PDF - Click to View

Machine Learning for Next Generation Wireless Communication Systems 5G/6G

Mr. Ashok Kumar Reddy Chavva, Samsung R&D Institute (2021-09-17)

With the emergence of fifth-generation (5G) networks, research focus has shifted towards exploring new technologies for the next-generation communication system, sixth generation (6G). The target expectations for 6G are even higher data rates, further reductions in latency, and ultra-massive machine-type connection density compared to 5G. In this search for new technologies, there has been significant interest in applying machine learning and artificial intelligence to communication systems. In this lecture, we motivate AI/ML for wireless communications by starting with simple machine learning applications and their similarities to communication use cases. During the course, you will learn about various wireless communication system blocks and the application of ML to them. We cover applications at the physical and higher layers, at both the base station and the user equipment. Later, we briefly discuss the possibility of replacing the end-to-end conventional communication system with an ML-trained one. We will observe that ML/AI does not always provide a benefit. In the later part of this lecture, we cover model-based and model-free systems and how they evolve with continuous improvements. We will further discuss and exercise a step-by-step procedure for designing an ML application for a wireless communication system under complexity, timing, performance, and training constraints. By the end of this lecture, you will have learned which blocks of communication systems ML-trained systems can replace, and where to apply, and where not to apply, ML/AI in wireless communication systems.

Click here to register

Building low latency and low power SmartCity Applications

Dr. Seong Hwan Kim, Xilinx (2021-09-16)

Computer vision and machine learning technology have advanced considerably in the past few years and are now being fused together to create a Smarter World of solutions that will soon improve our lives. For example, retail stores collectively lose billions of dollars annually; the National Retail Federation estimates losses of up to $60B in 2019. From smart cities to factories, hospitals, and buildings, our world is experiencing an explosion of computational power applied to real-time video analytics. Imagine a missing child spotted by a Smart City application and returned to their parents, or, in a hospital setting, an application monitoring the cameras in patients' rooms that alerts the staff to a critical situation minutes or even seconds earlier. However, the entire end-to-end pipeline must be built to support the low latency and low power required at the edge, where Smart City applications reside. Wireless 5G networks provide the fabric on which the video and the AI/ML-driven computations and results travel. With latency targets for these smart technologies in the tens of milliseconds, no other broadband network is up to the task. ML and video, combined with 5G, are making Smart City solutions deployable and more cost-effective. In this presentation, we will discuss which technologies are available to build optimal Smart City solutions and how they differ.

Click here to register

GPU Acceleration for 5G Signal Processing and Machine Learning

Dr. Chris Dick, NVIDIA (2021-09-15)

As the rollout of 5G progresses and research for 6G begins, the key themes of softwarization, virtualization, open systems, and artificial intelligence form foundational principles for communication systems of the future. The application of AI/ML to wireless communications is an extremely active research area, with tens to hundreds of papers published weekly reporting new results at the physical layer (L1), the MAC layer (L2), and the network optimization level. To realize the industry's vision of an AI/ML-powered wireless future, a full-stack solution is essential: a software-defined radio (SDR) approach for the vRAN, optimized silicon for AI, and application development frameworks for AI/ML. NVIDIA GPU technology and the associated CUDA programming model, together with a rich suite of AI/ML SDKs (Software Development Kits), provide these capabilities. In this talk we present Aerial, the software-defined, GPU-based, cloud-native 5G NR RAN platform. Aerial implements not only the 5G NR baseband signal processing but, using GPU virtualization, supports additional concurrently operating workloads, such as AI/ML inference, training, and data analytics, on this one hyper-converged system. We provide an overview of the L1 signal processing pipeline and describe efficient mechanisms for data movement between the GPU and the NIC-based fronthaul interface using a GPU-enabled Data Plane Development Kit (DPDK). A brief survey of some promising deep learning approaches for L1 and L2 enhancements is also presented.

Click here to register

Low-Density Parity Check (LDPC) based Advanced Error Correction Coding and 5G

Dr. Francisco García-Herrero, Associate Professor and a Researcher with the Universidad Antonio de Nebrija (2021-09-14)

This lecture gives an overview of high-level synthesis (HLS) tools and an evaluation of the HLS flow versus a manually optimized, traditional RTL design flow for LDPC decoders. It then surveys LDPC implementations for 5G-NR on various platforms such as GPUs, ASICs, and FPGAs.

Click here to register


Low-Density Parity-Check (LDPC) Code Based Advanced Error Correction Coding Algorithms and Architectures

Dr. Kiran Gunnam, Western Digital (2021-09-13)

Low-Density Parity-Check (LDPC) codes are now used in hard-disk-drive read channels, wireless standards (IEEE 802.11n/IEEE 802.11ac, IEEE 802.16e WiMAX), 10-GB, DVB-S2, flash SSDs and, more recently, 5G-NR cellular radio. This lecture covers LDPC-code-based advanced error correction coding algorithms and architectures. LDPC codes are now firmly established as coding techniques for communication and storage channels. This talk gives an overview of the development of low-complexity iterative LDPC solutions for communication channels, where complexity is reduced by developing new or modified algorithms and new hardware architectures.
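As a toy illustration of the iterative decoding idea behind low-complexity LDPC solutions, the sketch below runs Gallager's bit-flipping algorithm on the parity-check matrix of the small (7,4) Hamming code. This is an assumption made for illustration only: real LDPC matrices are much larger and sparser, and the talk's decoder architectures are more sophisticated.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code, used here as a tiny
# stand-in for an LDPC code (illustrative assumption, not a real LDPC matrix).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def bit_flip_decode(H, r, max_iters=10):
    """Gallager bit-flipping: repeatedly flip the bit that participates in
    the most unsatisfied parity checks until the syndrome is zero."""
    r = r.copy()
    for _ in range(max_iters):
        syndrome = H @ r % 2
        if not syndrome.any():
            return r                 # all parity checks satisfied
        votes = syndrome @ H         # per bit: # of unsatisfied checks it touches
        r[np.argmax(votes)] ^= 1     # flip the worst offender
    return r

tx = np.zeros(7, dtype=int)          # all-zero codeword
rx = tx.copy()
rx[3] ^= 1                           # channel flips one bit
decoded = bit_flip_decode(H, rx)
print(decoded)                       # recovers the transmitted codeword
```

One pass suffices here: the flipped bit participates in all three unsatisfied checks, so it receives the most "votes" and is corrected first.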

Click here to register

Modeling and Learning Social Influence from Opinion Dynamics

Dr. Anna Scaglione, Arizona State University (2020-06-25)

Opinion dynamics models aim at capturing the phenomenon of social learning through public discourse. While a functioning society should converge towards common answers, reality is often characterized by divisions and polarization. This talk reviews the key models that capture social learning and its vulnerabilities. In particular, we review models that explain the effect of bounded confidence and of social pressure from zealots (i.e., fake news sources), and show how very simple models can explain the trends observed when social learning is subject to these phenomena. We show how their influence exposes the trust that different agents place on each other, and introduce new learning algorithms that can estimate how agents influence each other.
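A minimal sketch of the bounded-confidence effect mentioned above, using the standard Hegselmann-Krause model (the population size, confidence radius, and iteration count are assumptions chosen for illustration): each agent repeatedly averages only the opinions within its confidence radius, and the population fragments into clusters rather than reaching full consensus.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 50)       # initial opinions of 50 agents
x0 = x.copy()
eps = 0.15                      # bounded-confidence radius (assumed value)

for _ in range(30):             # Hegselmann-Krause updates
    # each agent moves to the mean opinion of all agents within eps of it
    x = np.array([x[np.abs(x - xi) <= eps].mean() for xi in x])

clusters = np.unique(np.round(x, 6))
print(len(clusters))            # opinions collapse into a few clusters
```

Because each update is a local average, the opinion range can only shrink, yet with a small radius the agents settle into separated clusters, a simple caricature of polarization.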

Click here to register

Distributed Learning and Signal Processing Algorithms

Dr. Anna Scaglione, Arizona State University (2020-05-26)

Artificial intelligence (AI) today is about developing the capability of a single node to make an inference or to respond to its surroundings with the appropriate action. Inevitably, autonomous systems will evolve to operate in cooperation, as intelligent swarms. This talk will introduce peer-to-peer algorithms for distributed computation and inference. We will start from the Average Consensus (AC) primitive and its convergence properties over deterministic and random networks, and then introduce the Distributed Sub-Gradient (DSG) and Alternating Direction Method of Multipliers (ADMM) methods. Applications of these algorithms to distributed computation tasks such as hypothesis testing, linear regression, least-squares approximation, principal component analysis, and dictionary learning will be highlighted throughout the talk.
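The Average Consensus primitive mentioned above can be sketched in a few lines. In this hedged example (the network topology and mixing weights are assumptions for illustration), four nodes on a ring repeatedly mix their values with their neighbors through a doubly stochastic weight matrix, and every node converges to the network-wide average:

```python
import numpy as np

# Average consensus on a fixed 4-node ring: x(k+1) = W x(k).
# W is symmetric and doubly stochastic, so every node converges
# to the average of the initial values x(0).
W = np.array([[0.5 , 0.25, 0.  , 0.25],
              [0.25, 0.5 , 0.25, 0.  ],
              [0.  , 0.25, 0.5 , 0.25],
              [0.25, 0.  , 0.25, 0.5 ]])

x = np.array([1.0, 5.0, 3.0, 7.0])   # initial local measurements
target = x.mean()                    # 4.0

for _ in range(100):
    x = W @ x                        # each node mixes with its neighbors

print(x)                             # every entry approaches 4.0
```

The convergence rate is governed by the second-largest eigenvalue modulus of W (0.5 here), which is why a hundred iterations are far more than enough for this tiny network.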

Click here to register

IEEE 1547 and DER interconnections

Dr. Babak Enayati, National Grid, USA (2019-05-03)

Many countries have implemented renewables portfolio standards (RPSs) to accelerate the pace of deployment of renewable generation, which is distributed across the distribution power system. As the penetration of renewable power generation increases, electricity grids are beginning to experience challenges, often caused by the intermittent nature of some common renewable generation types, sudden changes in output power due to grid disturbances, the low short-circuit duty of inverter-based generators, and impacts on transmission and distribution system protection. Due to the increasing number of Distributed Energy Resources (DER) interconnections with the electric power system, the IEEE 1547 standard is going through a major revision to address some of the technical challenges associated with high penetration of DERs, e.g., grid-support functionalities. The participants will learn about the benefits and challenges of renewable energy resource interconnections as well as the major changes to IEEE 1547, e.g., voltage regulation, response to abnormal system conditions (including voltage and frequency ride-through), power quality, islanding, and interoperability. The participants will also learn about utility concerns and solutions in adopting the revised IEEE 1547 standard. National Grid's experience with smart inverters, e.g., how to set power factor and Volt/VAR based on the location of the solar facility, will also be presented at this session.

Click here to register

Designer Matter: Meta-Material Interactions with Light, Radio Waves and Sound

Dr. Andrea Alù, Photonics Initiative, CUNY Advanced Science Research Center (2019-05-03)

Metamaterials are artificial materials with properties well beyond what is offered by nature, providing unprecedented opportunities to tailor and enhance the interaction between waves and materials. In this talk, I discuss our recent research activity in electromagnetics, nano-optics, and acoustics, showing how suitably tailored meta-atoms and arrangements of them open exciting avenues to manipulate and control waves in unprecedented ways. I will discuss our recent theoretical and experimental results, including metamaterials for scattering suppression, metasurfaces to control wave propagation and radiation, large nonreciprocity without magnetic bias, giant nonlinearities in properly tailored metamaterials and metasurfaces, and active metamaterials. Physical insights into these exotic phenomena, new devices based on these concepts, and their impact on technology will be discussed during the talk.

Click here to register

Nonlinear Filters with Particle Flow

Dr. Frederick Daum, Raytheon Company (2019-05-03)

We have invented a new particle filter, which improves accuracy by several orders of magnitude compared with the extended Kalman filter for difficult nonlinear problems. Our filter runs many orders of magnitude faster than standard particle filters for problems with dimension higher than four. We do not resample particles, and we do not use any proposal density, which is a radical departure from other particle filters. We show very interesting movies of particle flow and many numerical results. The key idea is to compute Bayes' rule using a flow of particles rather than as a pointwise multiplication; this solves the well-known problem of "particle degeneracy". Our derivation is based on freshman calculus and physics. This talk is for normal engineers who do not have log-homotopy for breakfast.

Click here to register

LoRaWAN to Enable IoT at Scale

Mr. Tony Bowden, CTO and Co-Founder of Pansofik, LLC (2018-12-04)

In order to scale the deployment of IoT to billions of devices, they must be low cost and simple to install and maintain. LoRaWAN has proven to be extremely power-efficient for low-bit-rate communications while delivering both long-range outdoor and deep indoor penetration for in-building use cases, features that are ideal for many IoT deployment scenarios. This presentation will examine LoRaWAN's characteristics, along with reviews of several real-world deployment results, to show that the technology is a very promising alternative to existing and future cellular-based IoT solutions such as CAT-M1 and NB-IoT for enabling IoT at scale in fixed-location deployments.

Click here to register

From Top Level Design Specifications to Detail Design

Ed Palacio & Bob Lukachinski Lecture, P&L Technical Management Solutions (2018-03-29)

This discussion will focus on the process of breaking down top-level system specifications into detail design requirements that an individual designer can address. It will cover the systematic decomposition and allocation processes, from top-level requirements decomposition, to functional identification and allocation, to functional decomposition, and finally to the physical allocation to a design entity, be it microwave, analog, digital, or signal processing hardware, or FPGA or embedded processor code. It will use the introduction of Direct Digital Synthesizers into historically analog equipment as an example of the process. This discussion is appropriate for both practitioners and undergraduate engineering students.

Click here to register

Describing Function Analysis of Control Systems with Common Nonlinearities

Alan Lipsky, Consultant (2017-10-26)

This is an introduction to the describing function and its use in analyzing some common nonlinearities. Linear frequency-response methods that ignore these nonlinearities fail to predict limit-cycle oscillations; the describing function method discussed here predicts them. Books at least as far back as 1955, as well as various contemporary online descriptions, contain discussions of the describing function. This is an account of using the describing function to predict limit cycles. Frequency-response methods work well with single-input, single-output linear feedback systems, even those that have some form of nonlinearity, such as a slightly nonlinear gain; these generally yield to approximation by linear elements. There are, however, some common nonlinearities containing abrupt transitions that fail to yield to linearizing approximations and often cause limit-cycle oscillations. Three such nonlinearities are dead zone, saturation, and the infinite-gain limiter. The describing function is based on these assumptions: (1) circulation of higher harmonics through the loop attenuates them, so only a minuscule amount of higher-harmonic energy returns to the nonlinearity; (2) the nonlinearity produces no subharmonics or DC; and (3) the nonlinearity in the loop exists in one transfer function.
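As a small numerical companion to the ideas above, the sketch below computes the describing function of one of the three nonlinearities mentioned, a unit-slope saturation, by extracting the fundamental Fourier component of its response to a sinusoid, and checks it against the standard closed-form expression (the amplitude values are assumptions chosen for illustration):

```python
import numpy as np

def df_saturation_numeric(A, a, n=200000):
    """First-harmonic (describing-function) gain of a unit-slope saturation
    with limit a, driven by A*sin(theta): fundamental amplitude over A."""
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    y = np.clip(A * np.sin(theta), -a, a)
    b1 = 2 * np.mean(y * np.sin(theta))   # fundamental sine coefficient
    return b1 / A

def df_saturation_analytic(A, a):
    """Textbook closed form: N(A) = (2/pi)(asin(a/A) + (a/A)sqrt(1-(a/A)^2))."""
    r = a / A
    return (2 / np.pi) * (np.arcsin(r) + r * np.sqrt(1 - r * r))

A, a = 2.0, 1.0
num = df_saturation_numeric(A, a)
ana = df_saturation_analytic(A, a)
print(num, ana)
```

The resulting gain is below unity and falls as the drive amplitude grows, which is exactly the amplitude-dependent "equivalent gain" that the describing-function method intersects with the linear loop response to predict limit cycles.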

Click here to register

Cooperative Approaches for Ensuring Secret Wireless Communications

Dr. Athina Petropulu, Rutgers The State University of New Jersey (2017-05-05)

The prevalence of wireless technologies in our daily life is driven by our desire to communicate from anywhere at any time. However, due to the broadcast nature of the wireless channel, wireless communications are easily accessible to intruders, and ensuring the secrecy of confidential transactions conducted over wireless networks is a pressing need. Conventionally, wireless communications are secured using cryptographic protocols, which were mainly developed for wireline networks and as such have several flaws when applied to wireless networks. The talk discusses approaches to establishing a confidential channel between a source and the legitimate destination in the presence of one or more eavesdroppers. The confidential channel is created through the use of multiple antennas at the source, or via node cooperation, whereby nodes reinforce each other's communications and/or cooperatively jam the eavesdroppers. Thus, the legitimate destinations reliably receive the communicated information, but eavesdroppers cannot decode the communication signal even if they know the encoding/decoding schemes and encryption/decryption keys used by the transmitter and receiver.

Click here to register

Application of Discrete-Time Statistical Signal Processing: Part 2

Mr. Alan Lipsky, Consultant (2017-03-23)

The concepts of probability density and distribution functions are introduced and illustrated with the normal and uniform density functions. The normal density is shown to be a function of its mean and variance only. The notion of a random variable is explained and illustrated. The concept of computing the sample mean is illustrated with a few simple examples, such as the average expected from a large number of casts of a die. The sample mean and mean-square values are further illustrated by deriving the equations for linear regression that minimize the mean-square error between the measured data and a straight line. Auto- and cross-correlation time functions are defined, along with convolution and the unit sample response. For a stationary random process, the equivalence between the ensemble mean, referred to as expectation, and the sample mean is demonstrated. Computation of expectation using the probability density is generalized and illustrated with computation of the mean, mean-square value, variance, and correlation. Because most signal processing is in discrete time, wherever possible the discussions are illustrated with a discrete-time rather than a continuous-time independent variable.
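The linear-regression step described above can be written directly in terms of sample means: the least-squares slope is the sample cross-covariance divided by the sample variance, and the intercept follows from the two means. A minimal sketch (the synthetic data and noise level are assumptions for illustration):

```python
import numpy as np

# Least-squares line fit expressed through sample means:
# slope = mean((x - mx)(y - my)) / mean((x - mx)^2), intercept = my - slope*mx.
rng = np.random.default_rng(1)
x = np.arange(20, dtype=float)
y = 3.0 * x + 2.0 + rng.normal(0, 0.5, 20)   # noisy straight line

mx, my = x.mean(), y.mean()
slope = np.mean((x - mx) * (y - my)) / np.mean((x - mx) ** 2)
intercept = my - slope * mx
print(slope, intercept)                       # close to 3 and 2
```

These are exactly the normal-equation solutions that minimize the mean-square error between the data and the fitted line, tying the sample-mean machinery of the lecture to a concrete estimate.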

Click here to register

Application of Discrete-Time Statistical Signal Processing: Part 1

Mr. Alan Lipsky, Consultant (2017-03-09)

This is an introductory lecture, with no math. It mostly concerns applications of detecting, identifying, and interpreting a signal embedded in a noisy background in speech, image, SONAR, and RADAR processing with Wiener and Kalman filters. Both filters are optimum in minimizing the least-squares error in their output signal. Developments in statistical signal processing can be traced back to the early 1800s, when both Gauss and Legendre used the method of least squares to extract a comet's orbit from noisy measurements. In the 1940s Norbert Wiener published "Extrapolation, Interpolation and Smoothing of Stationary Time Series." He related a random signal's power density versus frequency characteristic to its autocorrelation. An optimum filter that minimizes the mean-square error in extracting a signal from noise is named for him. The next big advance in filtering occurred when Rudolf Kalman published a description of his filter in 1960. This filter updates continuously with a recursive solution that offers a low computational burden and yields both the signal and the system state. A Kalman filter was in the Apollo navigation computer used by Neil Armstrong to go to the moon, and is in many modern applications, particularly autonomous navigation.
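The recursive, low-burden update that distinguishes the Kalman filter can be seen in its simplest possible form: a scalar filter estimating a constant from noisy measurements (the signal value, noise variance, and initial guesses below are assumptions chosen for illustration):

```python
import numpy as np

# Scalar Kalman filter estimating a constant signal from noisy measurements.
rng = np.random.default_rng(2)
truth = 1.5
R = 0.25                        # measurement-noise variance
z = truth + rng.normal(0, np.sqrt(R), 200)

xhat, P = 0.0, 1.0              # initial estimate and its variance
for zk in z:
    K = P / (P + R)             # Kalman gain
    xhat = xhat + K * (zk - xhat)   # correct with the innovation
    P = (1 - K) * P             # posterior variance shrinks each update

print(xhat, P)
```

Each step costs a handful of arithmetic operations regardless of how many measurements have been seen, which is exactly the recursive advantage the abstract highlights; with no process noise, the filter reduces to a recursively computed weighted average.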

Click here to register

Advanced Techniques of Radar Detection in Non-Gaussian Background

Dr. Maria Sabrina Greco, Associate Professor at University of Pisa (2016-10-18)

For several decades, the Gaussian assumption on disturbance modeling in radar systems has been widely used to deal with detection problems. But in modern high-resolution radar systems the disturbance cannot be modeled as Gaussian distributed, and the classical detectors suffer high losses. In this talk, after a brief description of modern statistical and spectral models for high-resolution clutter, coherent optimum and sub-optimum detectors designed for such a background will be presented and their performance analyzed against a non-Gaussian disturbance. Different interpretations of the various detectors are provided that highlight the relationships and the differences among them. After this first part, some discussion will be dedicated to how to make the detectors adaptive by incorporating a proper estimate of the disturbance covariance matrix. Recent works on maximum-likelihood and robust covariance matrix estimation have proposed different approaches, such as the Approximate ML (or Fixed-Point) Estimator and the M-estimators. These techniques improve the detection performance in terms of false-alarm regulation and detection gain in SNR. Some results with simulated and real recorded data will be shown.
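A rough sketch of the fixed-point idea mentioned above: iterate a normalized scatter estimate in which each sample is weighted by the inverse of its Mahalanobis power, so heavy-tailed clutter samples do not dominate. The data model (compound-Gaussian samples with an assumed 2x2 scatter matrix) and the trace normalization are illustrative assumptions, not the exact estimators analyzed in the talk.

```python
import numpy as np

# Fixed-point (approximate-ML / Tyler-type) scatter estimator:
# S <- (p/N) * sum_i x_i x_i^T / (x_i^T S^{-1} x_i), then trace-normalized.
rng = np.random.default_rng(3)
p, N = 2, 500
C = np.array([[2.0, 0.8], [0.8, 1.0]])        # true scatter shape (assumed)
X = rng.multivariate_normal(np.zeros(p), C, N)
X *= rng.exponential(1.0, N)[:, None] ** 0.5  # random texture -> heavy tails

S = np.eye(p)
for _ in range(50):
    Sinv = np.linalg.inv(S)
    q = np.einsum('ij,jk,ik->i', X, Sinv, X)  # x_i^T S^{-1} x_i per sample
    S = (p / N) * (X / q[:, None]).T @ X      # reweighted sample covariance
    S *= p / np.trace(S)                      # fix the arbitrary scale

print(S)                                      # recovers the shape of C
```

Because the per-sample power q_i cancels the random texture, the iteration recovers the clutter's shape (correlation structure) even though the sample covariance of such data would be badly distorted.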

Click here to register

Analyzing Feedback Systems with Signal-Flow Graphs

Alan Lipsky, Consultant (2016-04-26)

Signal-flow graphs facilitate finding transfer functions for linear systems, both mechanical and electrical, and provide an intuitive understanding. Integro-differential equations are solved in the Laplace domain; using Mason's gain formula, the transfer function is found easily from the signal-flow graph. The resulting transfer function is a ratio of polynomials in powers of 's', the complex frequency variable. Signal-flow graphs are less general than state-variable formulations, since they are useful only for solving linear equations and do not consider initial conditions. In contrast, the state-variable formulation is ideal for computer solutions of multiple-input, multiple-output systems; flow graphs, however, yield a better intuitive grasp of the system. Unlike block diagrams, which ignore interactions between the output of one block and the input of the following one, flow graphs are an accurate representation. In the lecture, the rules for signal-flow graphs are introduced and Mason's gain formula is stated. A number of op-amp circuits and simple mechanical systems are solved. After stating Bode's criteria for stability, the graphs are used to illustrate why op-amps oscillate with capacitive loads.
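Mason's gain formula can be checked numerically on the simplest possible graph: one forward path and one feedback loop. In this sketch (the plant G(s) and the evaluation frequency are assumptions for illustration), the formula's terms are written out explicitly and compared with the familiar closed-loop expression:

```python
# Mason's gain formula for a single-loop negative-feedback system:
# one forward path P1 = G, one loop L1 = -G*H, no non-touching loops,
# so T = P1 * Delta1 / Delta = G / (1 + G*H).
def G(s):
    return 10.0 / (s + 1)        # forward-path gain (example plant)

def H(s):
    return 1.0                   # unity feedback

s = 1j * 2.0                     # evaluate at omega = 2 rad/s
P1 = G(s)
L1 = -G(s) * H(s)                # loop gain (negative feedback)
Delta = 1 - L1                   # graph determinant: 1 - (sum of loop gains)
Delta1 = 1                       # the loop touches the forward path
T_mason = P1 * Delta1 / Delta

T_direct = G(s) / (1 + G(s) * H(s))   # textbook closed-loop result
print(T_mason, T_direct)
```

For larger graphs the same recipe applies: enumerate forward paths and loops, build the determinant from the loop gains (correcting for non-touching loop products), and form each path's cofactor by deleting the loops it touches.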

Presentation Slides: PDF - Click to View (0.5 MB)

Protection From Lightning

Alan Lipsky, Consultant (2016-02-09)

Lightning strokes range from a few hundred amps to more than 500 kA; their energy spectrum ranges from DC to above 1 MHz. Damage is caused by the large current flow or the voltage it induces. The effects of lightning are mitigated with appropriate grounding techniques and surge protection at various points. There are a variety of surge protectors: gas discharge tubes, crowbars (thyristors), metal-oxide varistors (MOVs), and transient voltage suppressors (TVSs). The performance of each is discussed. The first two handle the largest power and are used on the power input to a facility. The latter two are more appropriate for equipment protection: MOVs at the power input and TVSs on boards with especially sensitive components. Because of their capacitance and leakage, neither is appropriate for signal or communication inputs; a diode circuit for protecting these inputs is shown and discussed. To ensure equipment survival, specification organizations have issued standard test wave shapes to simulate lightning-caused surges. Examples of these wave shapes are presented. Equipment should be both designed and tested to survive these tests.

Presentation Slides: PDF - Click to View (0.4 MB)

Signal Integrity & Routing Considerations for High-Speed Systems

Graham Smith, TE Connectivity (2015-10-29)

The connector-to-board interface regions (footprints) reside on either side of any mated connector pair and are an integral part of system electrical performance. As data transfer rates have increased, footprint and routing design considerations have increased concurrently and have a greater contribution to the overall Signal Integrity performance. Engineers must deal with these increased speeds by refining board designs to accommodate Printed Circuit Board (PCB) manufacturing capabilities while simultaneously finding ways to enhance performance to accommodate increased data rates.

Presentation Slides: PDF - Click to View (1.4 MB)

Enhanced Feedback Robustness Via Scaled Dither

Dr. Lijian Xu, SUNY Farmingdale (2015-04-23)

A new method is introduced to enhance feedback robustness against communication gain uncertainties. The method employs a fundamental property in stochastic differential equations to add a scaled stochastic dither under which tolerable gain uncertainties can be much enlarged, beyond the traditional deterministic optimal gain margin. Algorithms, stability, convergence, and robustness are presented for first-order systems. Extension to higher-dimensional systems is further discussed.

Presentation Slides: PDF - Click to View (3.6 MB)

Evolution of Digital Verification

Walter Gude, Mentor Graphics (2013-10-08)

Verification of digital systems used to be a pretty straightforward process: create a set of test vectors for each feature, apply these vectors, and track down any bug. The relentless march of Moore's law has caused these traditional methods to break down, first in the ASIC world and now increasingly in the FPGA world, and system test budgets have risen dramatically as a result. Up-front, tool-based verification is considered the most viable approach to balancing the budget, as statistics show that most functional bugs can be caught by front-end verification before physical unit test and system test.

Presentation Slides: PDF - Click to View (7.0 MB)

Reversing Time: A Way to Unravel Distorted Communications?

James V. Candy, Lawrence Livermore National Laboratory (2012-10-10)

Communicating in a complex environment is a daunting problem. Such an environment can be a hostile urban setting populated with a multitude of buildings and vehicles, the simple complexity of a large number of sound sources such as are common on a stock exchange, or military operations in an environment with topographic features: hills, valleys, mountains, etc. These inherent obstructions cause transmitted sounds or signals to bounce (reflect), bend (refract), and spread (disperse) in a multitude of directions, distorting both their shape and their arrival times at the targeted receiver locations. Time reversal is a simple notion that we have all observed (in a sense) when viewing a movie of the demolition of a building: merely running the movie in reverse, or equivalently running it backwards in time, allows us to reconstruct the building, at least visually, even though it cannot be reconstructed physically. Using this same idea, time reversal can be applied to "reconstruct" communication signals by retracing all of the multiple paths that distorted the transmitted signals in the first place. In order to separate or decompose the individual components of the message, the receiver must use its knowledge of the medium not only to separate each path but also to add the paths together coherently, extracting the message with little or no distortion and increasing its signal level.
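The multipath-retracing idea can be demonstrated in one dimension: send a pulse through a multipath channel, time-reverse what arrives, and send it back through the same channel. The paths re-align and add coherently into a sharp peak, the channel's autocorrelation. The channel taps below are an assumed toy impulse response, not a measured medium.

```python
import numpy as np

# Time-reversal focusing through a multipath channel h: retransmitting the
# time-reversed received waveform through the same channel compresses the
# spread-out multipath into one sharp peak (the autocorrelation of h).
h = np.array([1.0, 0.0, 0.6, 0.0, 0.0, -0.4, 0.3])   # assumed multipath taps
s = np.zeros(20); s[0] = 1.0                          # probe pulse

y = np.convolve(s, h)            # received: distorted, spread-out arrivals
z = np.convolve(y[::-1], h)      # time-reverse and pass through h again

peak = np.argmax(np.abs(z))
print(peak, np.abs(z).max())     # peak value equals sum(h**2)
```

The focused peak height equals the total multipath energy, sum(h**2): every path contributes constructively at one instant, which is precisely the coherent re-addition of paths the abstract describes.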

Presentation Slides: PDF - Click to View (3.0 MB)

Hardware Verification for Avionics & Safety Critical Design

Modesto Casas, Aldec (2012-05-23)

The common problems encountered during hardware testing of complex FPGAs in safety-critical designs are explored, along with the time savings attainable by re-using the simulation test bench as test vectors to perform in-hardware verification at speed. A set of tasks and the time required for in-hardware FPGA testing under DO-254, the Design Assurance Guidance for Airborne Electronic Hardware, is presented, and two methods of verification are contrasted. Traditionally, hardware verification is performed at the board level, where the FPGA under test is the primary component. The FPGA is also interconnected with other components on the board, and with the lack of test headers, visibility and controllability at the FPGA pin level are limited. At times the board may contain multiple FPGAs, further complicating the verification problem. Verification at the board level without first stabilizing each FPGA individually can lead to many problems and longer project delays. The methodology discussed is based on a bit-accurate in-hardware verification platform that is able to verify and trace the same FPGA-level requirements from RTL to the target device at full speed, while saving time and resources.

Presentation Slides: PDF - Click to View (4.0 MB)

Target Detection Using Optical Joint Transform Correlation

Dr. M. Nazrul Islam, SUNY Farmingdale (2011-11-30)

Automatic identification of a specific object or pattern in an arbitrary input scene is an important part of any authorization, monitoring, and security system. Pattern recognition is always a challenging problem because targets are often non-cooperative, and the scene may contain noise and distortions due to variable environmental conditions during recording. Additional requirements for an efficient pattern recognition system are that the architecture be simple, so that it can easily be implemented and be user friendly, and that it perform fast enough to make an instantaneous decision on the presence of a target in the input scene. The optical joint transform correlation (JTC) technique has been found to be a versatile tool for real-time pattern recognition applications; it employs optical devices, such as lenses and spatial light modulators, for parallel processing of the given images. The JTC scheme provides a number of advantages over other correlation techniques, such as the VanderLugt filter, in that it allows real-time updating of the reference image, permits parallel Fourier transformation of the reference image and input scene, operates at video frame rates, and eliminates the precise positioning requirement of a complex matched filter in the Fourier plane. Several modifications have been proposed to improve the correlation performance of the JTC technique, namely binary JTC, phase-only JTC, fringe-adjusted JTC, and shifted phase-encoded fringe-adjusted JTC. This presentation will review the features, problems, and prospects of optical pattern recognition techniques.
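A 1-D numerical sketch of the JTC principle (the real system is optical and 2-D; the patterns, positions, and array length here are illustrative assumptions): the reference and target are placed side by side in one "joint" input, the joint power spectrum is formed, and its inverse transform produces cross-correlation peaks at plus/minus their separation.

```python
import numpy as np

# 1-D joint transform correlation sketch: the inverse transform of the
# joint power spectrum is the autocorrelation of the joint input, whose
# off-axis peaks sit at the reference/target separation.
N = 512
ref = np.array([1.0, 2.0, 3.0, 2.0, 1.0])   # reference pattern
scene = np.zeros(N)
scene[40:45] += ref                          # reference placed at x = 40
scene[170:175] += ref                        # identical target at x = 170

jps = np.abs(np.fft.fft(scene)) ** 2         # joint power spectrum
corr = np.abs(np.fft.ifft(jps))              # correlation plane

# skip the strong on-axis (zero-lag) autocorrelation term near lag 0
peak_lag = np.argmax(corr[10 : N // 2]) + 10
print(peak_lag)                               # separation 170 - 40 = 130
```

Detecting the target thus reduces to locating the off-axis correlation peak, which is why the scheme needs no precisely positioned matched filter in the Fourier plane.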

Presentation Slides: PDF - Click to View (3.8 MB)

Tapping the TeraFLOP Potential of GP-GPU

Brooks Moses, Gil Ettinger, Eran Strod, Mentor Graphics, Sensor Exploitation, Curtiss-Wright (2011-06-15)

High performance image and signal processing applications are significantly benefiting from GP-GPU technology to extract meaningful information from large volumes of rich data sources. This seminar provides an overview of GP-GPU technology and how it can be expected to perform in image and signal processing applications such as target tracking. In addition, we discuss how this technology, which was developed for desktop computing, can be adapted to rugged environmental conditions that are typical of military and aerospace applications.

Presentation Slides: PDF - Click to View (1.9 MB)

Digital Signal Processing For Radar Applications

Michael Parker, Benjamin Esposito, Altera (2011-03-15)

This seminar features a space-time adaptive processing (STAP) pulsed-Doppler radar simulation with a back-end FPGA implementation, including a model of the radar system environment, an optimized implementation of STAP back-end processing, and the FPGA implementation itself. Solutions are presented to address challenges often faced by radar system and implementation engineers. The methodology and tools presented model and simulate systems and algorithms at a high level of abstraction and allow rapid exploration of design options (“what-if” scenarios), while efficiently and optimally implementing designs in FPGAs and ASICs.
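At the heart of the STAP back-end is an adaptive weight computation of the form w = R⁻¹v (suitably normalized), where R is an estimated space-time interference covariance matrix and v is the target space-time steering vector. A minimal NumPy sketch of that step (dimensions, snapshot count and diagonal-loading factor are illustrative assumptions, not values from the seminar):

```python
import numpy as np

def stap_weights(R, v):
    """MVDR-style STAP weights: w = R^{-1} v, scaled so that w^H v = 1."""
    Rinv_v = np.linalg.solve(R, v)          # solve instead of forming R^{-1}
    return Rinv_v / (v.conj() @ Rinv_v)     # distortionless-response scaling

# Toy example: diagonally loaded sample covariance from random snapshots.
rng = np.random.default_rng(1)
n = 8                                       # channels x pulses, flattened
snapshots = rng.standard_normal((n, 64)) + 1j * rng.standard_normal((n, 64))
R = snapshots @ snapshots.conj().T / 64 + 0.1 * np.eye(n)  # diagonal loading
v = np.ones(n, dtype=complex) / np.sqrt(n)                 # steering vector
w = stap_weights(R, v)
```

In the FPGA implementation discussed in the seminar, the costly part is exactly this linear solve (typically via QR decomposition), which is why the back-end processing is singled out for optimization.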

Presentation Slides: PDF - Click to View (4.0 MB)

Mapping DSP Algorithms Into FPGAs

Sean Gallagher, Xilinx (2010-11-02)

FPGAs have been used to craft massively parallel custom computing machines since the early 1990s, and since 2002 they have included embedded multipliers and adders. The next generation of the largest FPGAs from Xilinx will have an equivalent gate count in the millions and close to 4000 embedded multipliers and adders. The sheer quantity of multipliers and adders allows the designer to build many high-throughput DSP functions such as digital down-conversion circuits, FFTs, channelizers, etc. However, for low-throughput requirements it is also possible to use a smaller FPGA device and over-clock (time-share) the FPGA resources so as to require fewer of them. This presentation explores implementation options for efficiently building DSP algorithms such as parallel FFTs, channelizers and filters.
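A digital down-converter (DDC), one of the high-throughput functions named above, is conceptually just a mixer, a low-pass filter and a decimator; each stage maps directly onto the FPGA's embedded multipliers and adders. A behavioral Python model (sample rates, the moving-average filter and the decimation factor are arbitrary choices for this sketch, not from the presentation):

```python
import numpy as np

def ddc(x, fc, fs, decim, taps):
    """Mix x down by fc, low-pass filter it, and decimate by `decim`."""
    n = np.arange(len(x))
    nco = np.exp(-2j * np.pi * fc / fs * n)              # NCO: complex mixer
    baseband = x * nco                                   # shift fc to DC
    filtered = np.convolve(baseband, taps, mode="same")  # anti-alias LPF
    return filtered[::decim]                             # rate reduction

fs, fc, decim = 1000.0, 100.0, 4
t = np.arange(1000) / fs
x = np.cos(2 * np.pi * fc * t)       # input tone at the mixer frequency
taps = np.ones(16) / 16              # crude moving-average LPF (illustrative)
y = ddc(x, fc, fs, decim, taps)      # baseband output at fs/decim
```

In hardware the same structure is typically built with a CORDIC or lookup-table NCO and a polyphase or CIC filter, so that the filter only computes the samples that survive decimation.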

Presentation Slides: PDF - Click to View (1.0 MB)

Extending Laplace & Fourier Transforms: A Personal Perspective

Dr. Shervin Erfani, University of Windsor (2007-05-15)

The classical theory of variable systems is based on the solutions of linear ordinary differential equations with varying coefficients. The varying coefficients are usually functions of an independent variable, the so-called time variable. For physical systems the “time variable” is assumed to be a real variable. This assumption facilitates analysis and synthesis of fixed (so-called time-invariant) systems by allowing Laplace transform techniques to be used. However, the assumption of “real time” is shown to be inadequate for realization of time-varying systems in the transformed domain. The discussion in this presentation is based on a different point of view. Specifically, the approach consists essentially of investigating the possibility of system realization through an examination of the behavior of systems that are functions of a complex time variable. This approach allows, in effect, a two-dimensional Laplace transform technique to be used for time-varying systems in the same manner that the conventional frequency-domain technique is used in connection with fixed systems. The challenge is the physical interpretation of a “complex time variable” versus “real time,” and its implications for the transformed variable, the so-called “frequency variable.”
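For orientation, a standard textbook form of the two-dimensional Laplace transform referred to above maps a function of two time variables into a function of two complex frequency variables (this is the conventional definition; the speaker's own notation for the complex-time setting may differ):

```latex
F(s_1, s_2) = \int_{0}^{\infty}\!\int_{0}^{\infty} f(t_1, t_2)\,
              e^{-s_1 t_1 - s_2 t_2}\, dt_1\, dt_2
```

One natural reading of the complex-time approach is that the two real integration variables play the roles of the real and imaginary parts of the complex time variable, which is what lets a two-dimensional transform stand in for the one-dimensional Laplace transform used with fixed systems.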

Presentation Slides: PDF - Click to View (0.3 MB)

A Self-Coherence Based Anti-Jamming GPS Receiver

Moeness Amin, Villanova University (2005-03-10)

Despite ever-increasing civilian applications, the main drawback of GPS remains its high sensitivity to multipath and interference. The effect of interference on the GPS receiver is to reduce the signal-to-noise ratio (SNR) of the GPS signal to the point where the receiver is unable to obtain measurements from the GPS satellite. The spread-spectrum (SS) scheme, which underlies the GPS signal structure, provides a certain degree of protection against interference. However, when the interferer power becomes much stronger than the signal power, the spreading gain alone is insufficient to yield any meaningful information. This talk discusses a new anti-jamming technique for GPS. A novel anti-jam receiver using multiple antennas is introduced that relies on the replication of the coarse/acquisition (C/A) code within a GPS symbol. The proposed receiver utilizes the inherent GPS self-coherence property to excise narrowband and broadband interferers that have temporal structures different from that of the GPS signals.
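The self-coherence property exploited here can be illustrated in a toy simulation (not the receiver from the talk; the stand-in PRN sequence and jammer model are assumptions): because the C/A code repeats every code period within a data bit, the GPS component is perfectly correlated with a copy of itself delayed by one period, while a non-periodic jammer is not.

```python
import numpy as np

rng = np.random.default_rng(2)
period = 1023                                   # C/A code chips per period
code = rng.choice([-1.0, 1.0], size=period)     # stand-in PRN sequence
gps = np.tile(code, 20)                         # 20 code repetitions per data bit
jammer = 10.0 * rng.standard_normal(gps.size)   # strong non-periodic interferer

def lag_coherence(x, lag):
    """Normalized correlation between x and x delayed by `lag` samples."""
    a, b = x[:-lag], x[lag:]
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

rho_gps = lag_coherence(gps, period)      # ~1: the code repeats exactly
rho_jam = lag_coherence(jammer, period)   # ~0: no one-period periodicity
```

A receiver can therefore project the array data onto the subspace that is coherent at a one-code-period lag, keeping the GPS energy while excising interference that lacks this temporal structure.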

Presentation Slides: PDF - Click to View (1.0 MB)

Evolution of 3G Wireless Systems

Ariela Zeira, InterDigital (2003-05-14)

Third-generation wireless communication systems were introduced to extend the data capabilities of second-generation systems by providing quality-of-service (QoS) management and enabling the high data rates required for high-speed web access and for transmission and reception of high-quality images and video. To satisfy the predicted increasing demand for even higher-rate data services, additional enhancements are being incorporated into the different 3G air interface standards. The first step in evolving the 3G standards is enabling high-speed packet access in the downlink, or forward link, i.e. when the terminal is receiving information from the network. The higher data rates are achieved via new features such as adaptive modulation and coding, hybrid ARQ and fast scheduling. Other enhancements being considered are extending high-speed packet access to the uplink, or reverse link (when the terminal is transmitting information to the network), and smart antenna techniques. In this talk we will review the new features recently introduced or being considered for 3G air interfaces and discuss their impact on the performance of the evolving standards.
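Adaptive modulation and coding, one of the features named above, amounts to picking the highest-rate modulation/code-rate pair whose SNR threshold the current channel meets. A toy sketch of that selection logic (the thresholds, rates and throughput values here are invented for illustration, not taken from any 3G specification):

```python
# Each entry: (min SNR in dB, modulation, code rate, relative throughput).
# Ordered from highest rate to lowest; values are illustrative only.
AMC_TABLE = [
    (17.0, "16QAM", 3 / 4, 3.0),
    (10.0, "16QAM", 1 / 2, 2.0),
    (5.0,  "QPSK",  3 / 4, 1.5),
    (0.0,  "QPSK",  1 / 2, 1.0),
]

def select_mcs(snr_db):
    """Return the highest-rate scheme whose SNR threshold is met, else None."""
    for threshold, modulation, code_rate, throughput in AMC_TABLE:
        if snr_db >= threshold:
            return modulation, code_rate, throughput
    return None  # channel too poor: defer transmission to the fast scheduler

choice = select_mcs(12.0)  # a mid-range channel picks the middle scheme
```

Fast scheduling complements this: the base station preferentially serves users whose channels momentarily support the higher-rate entries of the table.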

Presentation Slides: PDF - Click to View (1.0 MB)