
Plugin Update (October 2024)

I’m really happy to announce another update to the aX Plugins. There are bigger and smaller changes to all of the plugins, adding new features, making them faster and easier to use, and refreshing the interface.

You can download the updated versions from your account.

aXPanner – New Feature!

Following user requests, aXPanner now has a new resizable pop-out window that you can overlay on 360 videos or images. It also gathers all instances of the plugin in one place, so you can get an overview of where all the other tracks in the project have been panned.

Each marker displays the name of the plugin’s track, so you always know what you’re working on. It also takes the colour you set for the track in your DAW and uses it for the marker, making things even clearer and more convenient. You can also select the marker colour manually, for extra customisation.

A screenshot of aXPanner with the new pop-out window open. The azimuth and elevation of all four tracks in the project can be controlled from one window.

aXCompressor and aXGate – Better Visualisation

aXCompressor and aXGate now have new gain-reduction meters. As well as just looking a bit nicer, they now show the input and output levels of the signals so you can get a really clear idea of just how much gain reduction you’re applying.

aXCompressor - Tenth order Ambisonics compressor plugin

All Plugins – Pro Tools Automation Shortcut

This was another user-requested feature and is a quality-of-life improvement for Pro Tools users: all plugins now support the Ctrl+Win+Click (Windows) and Ctrl+Cmd+Click (Mac) shortcut to open the automation lane automatically.

To use it you have to do one of two things:

  • Enable each parameter individually using the following shortcuts while clicking the parameter you want to automate: Ctrl+Win+Alt+Click (Windows) or Ctrl+Cmd+Alt+Click (Mac). This brings up a pop-up that lets you enable the parameter. Now you can bring up the automation lane using Ctrl+Win+Click (Windows) or Ctrl+Cmd+Click (Mac).
  • (Preferred) Enable “Plug-in Control Default to Auto-Enabled” in Preferences > Mixing in Pro Tools. Any new plugins you insert will have automation controls already enabled.

All Plugins – GUI Refresh

All of the plugins have had a light GUI refresh to make them a little sleeker and more modern looking. It’s subtle but I think makes a big improvement. Can you spot the changes?

What Do You Want To See?

Do you have any features you’d like to see in a future version of the plugins? Some small thing that would make your life easier? A cool feature that would make working with them even more fun? If so, get in touch with me and I’ll do my best to make sure it happens!


Now Supporting Tenth-Order Ambisonics

It has been six years since I released the first version of the a7, a3 and a1 plugin suites, for (up-to) seventh-, third- and first-order processing respectively. The time has absolutely flown by. I’d like to thank everyone who has used my plugins, shared their amazing creative projects with me, and provided feedback that continues to make the plugins better. There is plenty more to come in the future!

Today, however, I am happy to announce another big update, as the a7 suite evolves to become the aX suite.

aX isn’t just a new name. The updated aX plugins are able to process up to tenth-order Ambisonics! Going forward, I aim to have the aX versions of the plugins support the highest order available in each DAW for maximum spatial resolution.

As a re-introduction bonus, the aX tenth-order version of the plugins is on sale with a 30% discount until 11th February 2024!

Tenth-order is a lot. Why go above seventh-order?

The amount of error in binaural decoders of different orders as a function of frequency. Tenth-order has the lowest error across more of the spectrum.

In many cases seventh-order will be enough. Often, even third-order could be enough. It all depends on your signal flow and maybe the final order you are targeting (though I’d argue it’s always best to work with the highest order possible).

The main benefit of tenth-order processing is for binaural processing or for very large loudspeaker layouts. For some Ambisonics-to-loudspeaker decoders higher orders can also be beneficial when working with smaller irregular layouts. That’s something I hope to come back to in a later post.

However, most people can’t fit a giant loudspeaker layout (or even 7.1.4) in their living room, which leaves binaural as the main way to experience immersive audio. How we perceive binaural decoding depends mainly on three things: the HRTF we are using, the order of the Ambisonics signal, and the method used to create the decoding filters. Let’s focus on the impact of the order to see what going up to tenth-order gets us.

The graph shows the amount of error in the binaurally decoded signal as a function of frequency. For the first-order (1OA) decoder the error starts to rise even below 1000 Hz. Third-order gets us above 1000 Hz and you can usually hear a huge difference by going from 1OA to 3OA. For tenth-order, the error doesn’t rise significantly until nearly 4000 Hz. This means that if you are using a custom .SOFA HRTF in aXMonitor then you are going to get holophonic scene reproduction over even more of the spectrum.

That’s a great reason to use the maximum order possible when listening binaurally!

How can I work with tenth-order Ambisonics?

The recent Reaper 7 update expanded the number of channels-per-track to a whopping 128. Tenth-order Ambisonics needs 121 channels, as opposed to seventh-order which needs 64. All you need to do is load the plugins on a track and set the number of channels to at least 121. It’s exactly the same process you would have used for seventh-order. Other than that, using the aX plugins hasn’t changed.
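
If you want to sanity-check the channel counts, the relationship is simple: an order-N Ambisonics signal uses (N + 1)² channels. Here is a quick sketch (plain Python, nothing plugin-specific) that prints the counts for the orders mentioned in this post:

```python
# Channel count for order-N Ambisonics: (N + 1)^2
def ambisonic_channels(order: int) -> int:
    return (order + 1) ** 2

for order in (1, 3, 7, 10):
    print(f"Order {order:2d}: {ambisonic_channels(order):3d} channels")

# Order  1:   4 channels
# Order  3:  16 channels
# Order  7:  64 channels
# Order 10: 121 channels
```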

I work with Pro Tools/Cubase/Nuendo (any DAW except Reaper). Can I use tenth-order?

No, unfortunately not. Even though the aX plugins support tenth-order internally, the DAW they are loaded in has to provide enough channels per track to support it. Pro Tools Ultimate has seventh-order Ambisonics buses, so you can continue to use the new aX suite just like you did the a7 suite.

I bought the a7 suite. Do I have to pay for the new aX versions?

No! The a7 plugins have evolved into the aX plugins; they are not a completely new product. This means that once you update your plugins you will have access to tenth-order Ambisonics processing when using Reaper 7.

I bought the a1/a3 suite and would like to upgrade to the aX suite. Do I have to pay full price?

No! If you have already bought any of the a1 or a3 plugins or bundles then you can benefit from the upgrade policy. Just get in touch and I will send you a discount code that removes the cost of the product you have already bought from the purchase of the equivalent aX plugin/bundle.

I don’t need tenth-order… Lower orders are enough for me.

Absolutely! There are plenty of cases where lower order processing is more than enough. If you’re recording with an ambisonic microphone, for example, then you are limited to the order of your microphone. Don’t worry, the a1 and a3 suites are still available at a lower price than the aX plugins.



New Research: Adapting to HRTFs

I spend most of my time working on my plugins or developing new tools for clients to use in their projects and products. But sometimes I have the chance to be involved in fundamental research.

A paper I co-authored with Brian Katz (Sorbonne Université, Paris) and Lorenzo Picinali (Imperial College London) was published earlier this year in Scientific Reports, a Nature Research journal. If you want to read the paper, head over here – it’s Open Access so you can read it for free!

The title is: Auditory Accommodation to Poorly Matched Non-Individual Spectral Localization Cues Through Active Learning.

The paper looked at how well people can adapt to an HRTF over time with training. We then looked to see if, over time and without training, they would retain the localisation abilities they had gained. The “twist” was that we gave subjects an HRTF that was initially badly rated for them. We did this in order to investigate the worst-case scenario for content distributed without HRTF choice.

Studies like this are important for spatial and immersive audio because it still seems like it will be a while before consumers can have customised HRTFs. This means there will always be some people listening through an HRTF that is not well suited to them. If we can find ways to adapt users to these HRTFs then we can go some of the way to alleviating this problem.

Reference

Stitt, P., Picinali, L., & Katz, B. F. (2019). Auditory Accommodation to Poorly Matched Non-Individual Spectral Localization Cues Through Active Learning. Scientific Reports, 9(1), 1063.

Abstract

This study examines the effect of adaptation to non-ideal auditory localization cues represented by the Head-Related Transfer Function (HRTF) and the retention of training for up to three months after the last session. Continuing from a previous study on rapid non-individual HRTF learning, subjects using non-individual HRTFs were tested alongside control subjects using their own measured HRTFs. Perceptually worst-rated non-individual HRTFs were chosen to represent the worst-case scenario in practice and to allow for maximum potential for improvement. The methodology consisted of a training game and a localization test to evaluate performance carried out over 10 sessions. Sessions 1–4 occurred at 1 week intervals, performed by all subjects. During initial sessions, subjects showed improvement in localization performance for polar error. Following this, half of the subjects stopped the training game element, continuing with only the localization task. The group that continued to train showed improvement, with 3 of 8 subjects achieving group mean polar errors comparable to the control group. The majority of the group that stopped the training game retained their performance attained at the end of session 4. In general, adaptation was found to be quite subject dependent, highlighting the limits of HRTF adaptation in the case of poor HRTF matches. No identifier to predict learning ability was observed.


aXMonitor Update: Personalised Binaural with SOFA Support

The aXMonitor plugins are today updated to version 1.3.2. If you have already bought one of the aXMonitor plugins, you can download the update from your account.  You should remove any old versions of the plugin from your system to avoid any conflicts.

Today’s update is all about getting more flexibility and personalisation for binaural rendering of Ambisonics. This is probably the most requested feature update for any of my plugins, so I am very happy to be able to announce the new feature:

  • Load an HRTF stored in a .SOFA file for custom binaural rendering.

This allows you to produce binaural rendering for up to seventh order Ambisonics with whatever HRTF you want, providing you with the flexibility you need to produce the highest quality spatial audio content possible.

If you aren’t sure why so many people want personal HRTF support, keep reading.

Advantages of Personalised Binaural

Binaural 3D audio can be vastly improved by listening with a personalised HRTF (head related transfer function). It’s the auditory equivalent of wearing someone else’s glasses vs wearing your own. Sure, you can see most of what is going on with someone else’s glasses, but you lose detail and precision. Wear your own and everything comes into focus!

With that in mind, the aXMonitor plugins have been updated to allow you to load a custom HRTF that is stored in a .SOFA file. Now you can use your own individual HRTF (if you have it) or one that you know works well for you. Once an HRTF has been loaded it will be available to all instances of the plugin across other projects.

What is a .SOFA file?

A .SOFA file contains a lot of information about a measured HRTF (though it can be used for other things as well). You can read more about them here.
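
If you’re curious what’s inside one, .SOFA files are essentially netCDF-4/HDF5 containers, so you can inspect them with standard tools. Here is a minimal sketch using Python’s netCDF4 package, assuming an HRTF stored with the common SimpleFreeFieldHRIR convention (the file name is just a placeholder):

```python
from netCDF4 import Dataset  # pip install netCDF4

# "my_hrtf.sofa" is a placeholder path, not a file shipped with the plugins
with Dataset("my_hrtf.sofa", "r") as sofa:
    print("Convention:", sofa.SOFAConventions)      # e.g. SimpleFreeFieldHRIR
    fs = sofa.variables["Data.SamplingRate"][:]      # sampling rate in Hz
    hrirs = sofa.variables["Data.IR"][:]             # (measurements, receivers, samples)
    pos = sofa.variables["SourcePosition"][:]        # (measurements, 3): azimuth, elevation, distance
    print("Sampling rate:", fs)
    print("HRIR array shape:", hrirs.shape)
    print("Number of measured directions:", pos.shape[0])
```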

Where to get custom HRTFs

You can find a curated list of .SOFA databases here. The best thing to do is to try a few of them until you find one that gives you an accurate perception of the sound source directions. Pay particular attention to the elevation and front-back confusions, since these are what personalised HRTFs help most with.

If you want an HRTF that fits your head/ears exactly then your options are a bit more limited. You can find somewhere, usually an academic research institute, that has an anechoic chamber and the appropriate equipment, put some microphones in your ears and sit still for 20–120 minutes (depending on their system). Once it’s done, you have your HRTF!

But if you don’t fancy going to all of that trouble, there are some options for getting a personalised HRTF more easily. A method by 3D Sound Labs requires only a small number of photographs and they claim good results. Finnish company IDA also offers a similar service.

Get the aXMonitor

So if you like the sound of customised binaural rendering then you can purchase the aXMonitor from my online shop. Doing so will help support independent development of tools for spatial audio.


a1Monitor


a3Monitor


a7Monitor


aXMonitor Update: Google Resonance Audio HRTFs

Today the aXMonitor plugins get their first major update to version 1.2.2. There are two major updates and one minor update. Let’s start with the major updates:

  • The HRTFs used for binaural 3D sound have been regenerated using Google’s own Resonance Audio toolkit for VR audio. These are the same HRTFs used by Google in YouTube 360. The code released by Google only goes up to 5th order, but it was actually quite simple to extend to 7th order.
  • A gain control has been added to boost or cut the overall level for convenience.

The minor update is a fix to make sure the plugin reports the correct latency to the host when using the Binaural or UHJ Super Stereo (FIR) methods.

Google have just open-sourced their Resonance Audio SDK, including all sorts of tools for spatial audio rendering. This update to aXMonitor ensures that you can mix your content on HRTFs that will be widely used across the industry.

The aXMonitor is available in 3 versions, providing up to first, third and seventh order Ambisonics-to-binaural decoding.

So if you’d like to start mixing your VR/AR/MR audio content just head over to my store. With your support, I can continue to update the aX Ambisonics Plugins to bring you the tools you want and need.


a1Monitor


a3Monitor


a7Monitor


aXRotate Update to v1.2.0: Now With Head Tracking!


The aXRotate plugin receives an update today to version 1.2.0 and it’s a big one! What’s more, it is now also available in (Universal Binary) AudioUnit format for Mac!

If you have already bought it, you can download the update from the download section of your Account page. If you haven’t, you can pick it up at my online shop!


a1Rotate


a3Rotate


a7Rotate

Version 1.0.0 was a plain vanilla Ambisonics rotation with yaw, pitch and roll control. Version 1.2.0 adds two new features that massively increase its usefulness:

  • Get head tracking by connecting an EDTracker module.
  • Increase the spaciousness of your static binaural mixes by adding micro oscillations to the sound scene.

Let’s go into both of these new features in a bit more detail.


Introducing the aX Ambisonics Plugins

aXCompressor - Tenth order Ambisonics compressor plugin

Today I am very happy to be releasing my latest work: the aX Ambisonics plugins. They are the result of a lot of work and it is great to be able to finally release them into the world.

The aX Plugins are a set of VST plugins intended to make your work with spatial and immersive audio that little bit easier. They come in three variations, each with equivalent plugins: a1, a3 and a7.

Which one you choose will depend on the level of spatial resolution you need for your project (how accurately the spatial properties are reproduced to the final listener). The different levels are known in the Ambisonics world as the order and can theoretically go to infinity. In practice we can (thankfully!) stop somewhere quite a bit before infinity! The aX Plugins give you a choice between basic, advanced and future-proof versions.

What are the plugins and what can they do?

There are currently seven plugins in each suite, each with a different purpose. Here is a quick summary:

  1. aXPanner – a stereo to Ambisonics encoder to bring your sounds into the spatial domain.
  2. aXRotate – this plugin will let you rotate a single track or a full sound scene to make sure you have everything exactly where you want it.
  3. aXMonitor – Ambisonics needs a decoder to be listened to. This plugin decodes to binaural 3D audio (over headphones) or to standard stereo. This means you can always share your creativity via traditional channels.
  4. aXCompressor – Ambisonics requires careful handling of the audio to avoid changing the spatial balance. aXCompressor lets you compress the signal without alteration.
  5. aXGate – similarly, this plugin acts as a noise gate and downwards expander while preserving the spatial fidelity.
  6. aXEqualizer – safely sculpt the tone of your signals.
  7. aXDelay – get creative with five independent delay modules that can be rotated independently of the original signal.

I will be doing a series of posts going into more detail about each plugin. You can also get more information on the product pages. In the meantime, if you are curious, you can download demo versions of these plugins (for evaluation purposes only) here and if you like them you can support future development by buying them from the shop. Thanks!

a1Compressor - First-order Ambisonics compressor plugin
a1Delay - First-order Ambisonics delay plugin
a1Equalizer - First-order Ambisonics equalizer plugin
a1Gate - First-order Ambisonics gate and downwards expander plugin
a1Monitor - First-order Ambisonics stereo and binaural decoding plugin
a1Rotate - First-order Ambisonics rotator plugin
a3Compressor - Third-order Ambisonics compressor plugin
a3Delay - Third-order Ambisonics delay plugin
a3Equalizer - Third-order Ambisonics EQ plugin
a3Gate - Third-order Ambisonics gate and downwards expander plugin
a3Monitor - Third-order Ambisonics stereo and binaural rendering plugin
a3Panner - Third-order Ambisonics encoder/panner plugin
a3Rotate - Third-order Ambisonics rotator plugin
a7Compressor - Seventh-order Ambisonics compressor plugin
a7Delay - Seventh-order Ambisonics delay plugin
a7Equalizer - Seventh-order Ambisonics EQ plugin
a7Gate - Seventh-order Ambisonics gate and downwards expander plugin
a7Monitor - Seventh-order Ambisonics stereo and binaural rendering plugin
a7Panner - Seventh-order Ambisonics encoder/panner plugin
a7Rotate - Seventh-order Ambisonics rotator plugin

What’s Missing From Your 3D Sound Toolbox?

Audio for VR/AR is getting a lot of attention these days, now that people are realising how essential good spatial audio is for an immersive experience. But we still don’t have as many tools as are available for stereo. Not even close!

This is because Ambisonics has to be handled carefully when processing in order to keep the correct spatial effect – even a small phase change between channels can significantly alter the spatial image – so there are very few plugins that can be used after the sound has been encoded.

To avoid this problem we can apply effects and processing before spatial encoding, but then we are restricted in what we can do and how we can place it. It is also not an option if you are using an Ambisonics microphone (such as the SoundField, Tetra Mic or AMBEO VR), because the signal is already encoded! We need to be able to process Ambisonics channels directly without destroying the spatial effect.
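
To make that idea concrete, here is a minimal sketch (plain Python/NumPy, not code from any of my plugins) of the spatially “safe” way to apply dynamics to an encoded scene: derive a single gain signal, for example from the omnidirectional W channel, and apply that same gain to every Ambisonics channel. Because all channels receive identical processing, their relative levels and phases, and therefore the spatial image, are preserved.

```python
import numpy as np

def spatially_safe_compress(ambi: np.ndarray, threshold_db: float = -20.0,
                            ratio: float = 4.0) -> np.ndarray:
    """Very rough compressor sketch. ambi has shape (channels, samples),
    with channel 0 assumed to be the omnidirectional W component (ACN ordering)."""
    w = ambi[0]
    level_db = 20.0 * np.log10(np.abs(w) + 1e-12)        # instantaneous level of W
    over_db = np.maximum(level_db - threshold_db, 0.0)   # amount above threshold
    gain_db = -over_db * (1.0 - 1.0 / ratio)             # static compression curve
    gain = 10.0 ** (gain_db / 20.0)
    return ambi * gain  # the same gain is applied to every channel at every sample

# Because every channel is scaled identically, the inter-channel level and phase
# relationships (i.e. the spatial information) are left untouched.
```

(A real compressor would of course add attack/release smoothing and make-up gain; the point here is only that the gain is shared across all channels.)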

So, what is missing from your 3D sound toolbox? Is there a plugin that you would reach for in stereo that doesn’t exist for spatial audio? Maybe you want to take advantage of the additional spatial dimensions but don’t have a tool to help you do that. Whatever you need, I am interested in hearing about it. I have a number of plugins that will be available soon that will fulfil some technical and creative requirements, but there can always be more! In fact, I’ve already released the first one for free. I am particularly interested in creative tools that would be applied after encoding but before decoding.

With that in mind, I am asking what you would like to see that doesn’t exist. If you are the first person to suggest an idea (either via the form or in the comments) and I am able to make it into a plugin then you’ll get a free copy! There is plenty of work to do to get spatial audio tools to the level of stereo but, with your help, I want to make a start.


Ambisonics to Stereo Comparison

In my last post I detailed two methods of converting Ambisonics to stereo. Equations and graphs are all very good, but there’s nothing better than being able to listen and compare for yourself when it comes to spatial audio.

With that in mind, I’ve made a video comparing different first-order Ambisonics to stereo decoding methods. I used some (work-in-progress) VST plugins I’m working on for the encoding and decoding. I recommend watching the video with the highest quality setting to best hear the difference between the decoders.

There are 4 different decoders:

  • Cardioid decoder (mid-side decoding) – see the sketch after this list
  • UHJ (IIR) – UHJ stereo decoding implemented with an infinite impulse response filter.
  • UHJ (FIR) – UHJ stereo decoding using a finite impulse response filter.
  • Binaural – Using the Google HRTF.
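
For anyone curious how the cardioid (mid-side) option works under the hood, here is a minimal sketch, assuming first-order B-format in ACN/SN3D ordering (W, Y, Z, X): each stereo channel is just a virtual cardioid microphone pointing hard left or hard right. The exact scaling depends on the normalisation convention (FuMa W carries an extra factor of 1/√2, for example), so treat this as illustrative rather than a drop-in decoder.

```python
import numpy as np

def cardioid_stereo_decode(w: np.ndarray, y: np.ndarray):
    """First-order Ambisonics to stereo via virtual cardioids at +/-90 degrees azimuth.
    Assumes ACN/SN3D components: w = omni, y = left-right figure-of-eight."""
    left = 0.5 * (w + y)    # virtual cardioid pointing left
    right = 0.5 * (w - y)   # virtual cardioid pointing right
    return left, right
```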

The cardioid decoder more quickly moves to, and sticks in, the left and right channels as the source moves, while this is more gradual with the UHJ decoder. To me, the UHJ decoding is much smoother than the cardioid, making it perhaps a bit easier to get a nice left-right distribution that uses all of the space, while cardioid leads to some bunching at the extremes.

The binaural decoder gives more externalisation, though with pretty significant colouration changes compared to the UHJ and cardioid decoding; it also potentially allows some perception of height, which the others don’t.

The VSTs in the video are part of a set I’ve been working on that should be available some time in 2018. If you’re interested in getting updates about when they’re released, sign up here.



Better Externalisation with Binaural

Some research that I was involved in was published last week in the Journal of the Audio Engineering Society [1]. You can download it from the JAES e-library here. The research was led by Etienne Hendrickx (currently at Université de Bretagne Occidentale) and was a follow on from other work we did together on head-tracking with dynamic binaural rendering [2, 3, 4].

The new study looked at externalisation (the perception that a sound played over headphones is emanating from the real world, not from inside the listener’s head). It specifically investigated the worst-case scenario for externalisation – sound sources directly in front of (0°) or behind (180°) the listener. It tested the benefit of listeners moving their head, as well as of listeners keeping their head still while the binaural source followed a “head movement-like” trajectory. Both were found to give some improvement to the perceived externalisation, with head movement providing the most improvement.

The fact that source movements can improve externalisation is important because we don’t always have head tracking systems. Most people will experience binaural with normal headphones. This hints at a direction for some “calibration” to help the listener get immersed in the scene, improving their overall experience.

Also important: the listeners in the study were all new to binaural content. Lots of previous studies use expert listeners, but the vast majority of real-world listeners are not experts! The results of this paper are encouraging because they show that you don’t need hours of binaural listening to benefit from a fairly immediate perceptual improvement.

References

[1] E. Hendrickx, P. Stitt, J.-C. Messonnier, J.-M. Lyzwa, B. F. Katz, and C. de Boishéraud, “Improvement of Externalization by Listener and Source Movement Using a ‘Binauralized’ Microphone Array,” J. Audio Eng. Soc., vol. 65, no. 7, pp. 589–599, 2017. link

[2] E. Hendrickx, P. Stitt, J.-C. Messonnier, J.-M. Lyzwa, B. F. Katz, and C. de Boishéraud, “Influence of head tracking on the externalization of speech stimuli for non-individualized binaural synthesis,” J. Acoust. Soc. Am., vol. 141, no. 3, pp. 2011–2023, 2017. link

[3] P. Stitt, E. Hendrickx, J.-C. Messonnier, and B. F. G. Katz, “The Role of Head Tracking in Binaural Rendering,” in 29th Tonmeistertagung – VDT International Convention, 2016, pp. 1–5. link

[4] P. Stitt, E. Hendrickx, J.-C. Messonnier, and B. F. G. Katz, “The influence of head tracking latency on binaural rendering in simple and complex sound scenes,” in Audio Engineering Society Convention 140, 2016, pp. 1–8. link