
Zynaptiq Acquires The DSP Dimension

Posted by neuronaut on February 03, 2014
News

Hannover, February 3rd, 2014 — Zynaptiq GmbH announces the acquisition of The DSP Dimension and all of its assets and technologies.

Founded in 2005 by Stephan Bernsee as an audio and image technology research and development firm, The DSP Dimension is widely known for its critically acclaimed time stretching technology, its feature-based RTmorph sound morphing software, and its cutting-edge audio, speech, speaker, gesture and face recognition technologies. The DSP Dimension web site has also become a popular repository for tutorials on audio signal processing, written to explain digital signal processing techniques from an engineer's hands-on perspective with a minimum of math.

Zynaptiq will continue to offer all available The DSP Dimension technologies for licensing, and The DSP Dimension team will continue to provide technical support under the new owner.

“The acquisition of The DSP Dimension extends our business scope and product range significantly,” says Denis Goekdag, CEO at Zynaptiq. “Integrating these assets with our existing IP opens up exciting new opportunities for signal acquisition and processing that our users and clients will benefit from greatly in the near future,” adds Stephan Bernsee, founder of The DSP Dimension and CTO at Zynaptiq.

Zynaptiq GmbH, based in Hannover, Germany, creates audio software based on artificial intelligence technology and is known for its award-winning UNVEIL reverb removal, UNFILTER adaptive equalization and PITCHMAP real-time polyphonic pitch processing plug-ins.

For more information, please visit the Zynaptiq web site at

http://www.zynaptiq.com


Tom Zicarelli Releases AudioGraph (iOS)

Posted by neuronaut on December 08, 2011
News

Mainz, Germany, December 8th, 2011 — Software developer Tom Zicarelli has released an open source iOS audio processing graph demonstration called “audioGraph”. According to the author, the work is based in part on The DSP Dimension pitch shifting tutorial.

You can find more information on the software at http://zerokidz.com/audiograph and a demonstration video at http://www.youtube.com/watch?v=ApJktPS02-Q

Get the code from https://github.com/tkzic/audiograph


ActionScript3 Version of smbPitchShift Available

Posted by neuronaut on August 27, 2009
News

Mainz, August 27th, 2009 – Arnaud Gatouillat has posted an ActionScript3 port of our smbPitchShift code from the “Pitch Shifting Using the Fourier Transform” tutorial on his web site at http://iq12.com/blog/2009/08/25/real-time-pitch-shifting/. If you have Flash 10 installed on your computer you can run it directly off his web site in real time. Source code is also available.


Multi Component Particle Transform Synthesis

Posted by neuronaut on March 17, 2004
Uncategorized

The <Neuron> audio synthesis engine uses what is called “Multi Component Particle Transform Synthesis”, a form of synthesis that could best be described as “adaptive resynthesis”. To reproduce sounds electronically with this synthesis, the audio signal must first be analyzed and converted into a form usable by the Neuron Audio System. In the case of the <Neuron> synthesizer, this is done in a separate cross-platform application called the “ModelMaker”. With this software application, sampled sounds are converted into a synthesis model that the <Neuron> can use to resynthesize sounds.

Resynthesis – Historical Background

The concept of “resynthesis” was introduced in the 1980s to describe a form of sound synthesis that builds an arbitrary sound from a sum of sine tones of different frequencies and amplitudes. Several hundred sine tones were mixed at appropriate frequencies and amplitudes to recreate the spectrum of an analyzed sound. This was a very clever concept: sine tones are simple trigonometric building blocks that are relatively easy to generate using analogue and digital oscillators, and they perfectly represent the notion of “frequency”, since any such tone has a single, constant frequency.
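The analysis/resynthesis loop described above can be sketched in a few lines of Python. This is only a minimal illustration of the principle, not any particular product's implementation; all function names and parameter choices here are made up for the example, and NumPy is assumed to be available. A signal is analyzed with an FFT, the strongest partials are kept, and the sound is rebuilt as a sum of sine tones:

```python
import numpy as np

def additive_resynthesis(freqs, amps, phases, sr=8000, duration=1.0):
    """Rebuild a sound as a sum of sine tones (classic additive resynthesis)."""
    t = np.arange(int(sr * duration)) / sr
    out = np.zeros_like(t)
    for f, a, p in zip(freqs, amps, phases):
        out += a * np.sin(2 * np.pi * f * t + p)
    return out

# Toy "analysis" stage: a two-partial test tone, analyzed via FFT.
sr = 8000
t = np.arange(sr) / sr
original = 0.8 * np.sin(2 * np.pi * 440 * t) + 0.2 * np.sin(2 * np.pi * 880 * t)

spectrum = np.fft.rfft(original)
bins = np.argsort(np.abs(spectrum))[-2:]           # two strongest partials
freqs = bins * sr / len(original)                  # bin index -> Hz
amps = 2 * np.abs(spectrum[bins]) / len(original)  # bin magnitude -> amplitude
phases = np.angle(spectrum[bins]) + np.pi / 2      # cosine phase -> sine phase

# Resynthesis stage: the two sine tones recreate the analyzed signal.
resynth = additive_resynthesis(freqs, amps, phases, sr=sr, duration=1.0)
error = np.max(np.abs(original - resynth))
```

For a real instrument sound, "several hundred" partials would be kept instead of two, which is exactly the practical burden the next paragraphs describe.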

 

The Hartmann <Neuron> synthesizer, introduced in 2003, is still one of the most unusual synthesizers today. It has been awarded several prestigious prizes for innovation and outstanding design. Developed by leading industry designer Axel Hartmann using sound synthesis technology from our company, this synthesizer is a much sought-after piece of equipment, even years after the Hartmann company went out of business in 2005 over a dispute with its worldwide distributor.

This concept, on the other hand, had to struggle with severe disadvantages. Aside from being almost unusable in practice (no human being can operate a synthesizer that requires controlling several hundred sine tones at a time), it had inherent technical problems: sine tones are periodic (after a certain amount of time, they repeat their shape) and have no localization in time (they have no beginning and no end). Therefore, they are perfectly suited to synthesizing periodic signals that have a constant frequency, but it is difficult to resynthesize signals that have a short duration and are impulsive in nature. In most sounds and music, both types of signals occur, either sequentially or in combination. This is the reason why most resynthesized signals sound diffuse and reverberant; they suffer from an effect called “phasiness”.
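The time-localization problem is easy to demonstrate numerically. A single-sample click spreads its energy evenly across every frequency bin, so a sine-tone basis needs all of its partials simultaneously, with precisely aligned phases, just to place one short event in time. A quick check in Python (a sketch with NumPy assumed, unrelated to any product's internals):

```python
import numpy as np

N = 1024
impulse = np.zeros(N)
impulse[0] = 1.0                      # a single-sample click

spectrum = np.abs(np.fft.rfft(impulse))

# Every one of the N/2 + 1 frequency bins carries the same magnitude:
# there is no "dominant" partial, so an additive model cannot thin out
# the sine set without smearing the click in time.
flat = np.allclose(spectrum, spectrum[0])
```

Any phase misalignment among those many partials smears the click out in time, which is precisely the “phasiness” described above.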

 

One step further

However, there are other forms of resynthesis that do not rely on sine tones as basic building blocks. A popular example is the Wavelet Transform, which does not use sine tones as its “basis”, but rather more complex animals called “wavelets”.

In practice, there is an almost infinite number of different bases that allow building complex sounds by adding, translating, scaling and convolving them. Each of these bases has its own unique properties, along with its own unique advantages and disadvantages.
You may have encountered the popular picture that shows how one can construct a sawtooth wave by adding different sine waves. In this process, one resynthesizes a complex sawtooth wave by adding sine waves of different frequencies and amplitudes. The sawtooth wave is built from a sine basis.

 

Of course, you could also do the reverse: by adding a number of sawtooth waves of different amplitudes and frequencies, you could resynthesize a simple sine wave.

Illustration: Sine waves can be used to build a more complex waveform like a sawtooth wave, but sawtooth waveforms can in turn be used to resynthesize a simple sine wave. From top to bottom, more waves are used at each stage. The left sequence builds a sawtooth wave from sine waves; the right sequence builds a sine wave from sawtooth waves.
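The left half of that illustration corresponds to the textbook Fourier series of a sawtooth wave. A short Python sketch (NumPy assumed; the frequencies and partial counts are illustrative only) shows the approximation getting closer to the ideal waveform as more sine partials are added:

```python
import numpy as np

def sawtooth_from_sines(t, f, partials):
    """Partial sum of the sawtooth Fourier series:
    (2/pi) * sum_{k=1..partials} (-1)^(k+1) * sin(2*pi*k*f*t) / k
    """
    out = np.zeros_like(t)
    for k in range(1, partials + 1):
        out += ((-1) ** (k + 1)) * np.sin(2 * np.pi * k * f * t) / k
    return (2 / np.pi) * out

sr = 8000
t = np.arange(sr) / sr
# Ideal 5 Hz sawtooth ramping between -1 and 1.
ideal = 2 * (t * 5 - np.floor(t * 5 + 0.5))

# Mean squared error shrinks as the sine basis grows.
err_10 = np.mean((sawtooth_from_sines(t, 5, 10) - ideal) ** 2)
err_100 = np.mean((sawtooth_from_sines(t, 5, 100) - ideal) ** 2)
```

The residual error near the waveform's jumps never vanishes entirely for a finite sum (the Gibbs phenomenon), which hints at why a basis matched to the signal, as discussed next, can be so much more efficient than a fixed sine basis.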

If we adapt the basis used for resynthesis to the sound that is to be resynthesized, we get a very efficient representation of that sound, one that is custom tailored to its inherent sonic qualities.

Custom tailored

The Neuron Audio System uses a type of resynthesis where the synthesis itself is adapted to the sound. The basis function(s) used to resynthesize this sound are determined by evaluating its underlying model. It is this concept that allows not only a faithful reproduction but also a vast variety of modifications of the basic instrument.

To synthesize the sound, the underlying model of the sound generator has to be estimated from, and adapted to, the sonic qualities of the original sampled sound. The Neuron System analyzes both the type of vibrating medium and the resonant corpus used to shape the sound. This process is computationally very demanding, which is why it is done at the analysis stage. A pattern recognition process based on artificial neural networks is used to create a model from the sound. If the sound cannot be uniquely represented by a single model (which may be the case for drum loops and complete musical pieces), it is constructed from a time-variable morph between different models, each representing the sound at that instant.

After the pattern recognition stage, the basic model is subsequently refined, and its parameters are adjusted to reproduce the original sound as accurately as possible. The accuracy is determined by the “Complexity” parameter inside the ModelMaker application. Complex models are built from a set of simpler models, up to the limit set by the complexity constraint. Basic properties such as the size of the resonant body, material and elasticity (determined and accessed through the so-called parameter set) can be influenced and combined from other models during resynthesis.


Prosoniq Morph – Behind the scenes

Posted by neuronaut on November 01, 2003
Interviews

Stephan M. Bernsee talks about developing the Prosoniq Morph plug-in

by Reinhard G. Benedikt November 1st, 2003, translated by Eva Remowa

 

How did you come to develop the Morph plug-in?
Well, I began thinking about such an application shortly after I visited Peter Gabriel at Realworld studios, where I was introducing the Neuron synthesizer. They needed a way to seamlessly morph from one sound to another for a television production they were working on. It turned out that the Neuron was capable of doing this, but it is certainly not a good idea to purchase a Mercedes when you just need an easy way to get from point A to point B. So I figured that many people might need such a tool, and we began tossing around the idea of doing a real-time morphing plug-in. As it turns out, the Prosoniq Morph has the potential to be one of our most successful products. Continue reading…


Pitch Shifting Using The Fourier Transform

Posted by neuronaut on September 21, 1999
Tutorials

With the increasing speed of today's desktop computer systems, a growing number of computationally intense tasks, such as computing the Fourier transform of a sampled audio signal, have become available to a broad base of users. Once a process implemented only on dedicated DSP systems or on powerful computers available to a limited number of people, the Fourier transform can today be computed in real time on almost any average computer system. Continue reading…


The DFT “à Pied”: Mastering The Fourier Transform in One Day

Posted by neuronaut on September 21, 1999
Tutorials

If you’re into signal processing, you will no doubt say that the headline is a very tall claim. I would second this. Of course you can’t learn all the bells and whistles of the Fourier transform in one day without practising and repeating and eventually delving into the maths. However, this online course will provide you with the basic knowledge of how the Fourier transform works, why it works and why it can be very simple to comprehend when we’re using a somewhat unconventional approach. Continue reading…


Time Stretching And Pitch Shifting of Audio Signals – An Overview

Posted by neuronaut on August 18, 1999
Tutorials

This tutorial gives a brief overview of the most popular algorithms used for achieving time stretching and pitch shifting in a musical context, along with their advantages and disadvantages. We provide audio examples to demonstrate common artifacts and effects associated with these processes, and provide pointers to papers and other resources on the net. Continue reading…
