I recently got a pair of the Nuand BladeRF SDR transceivers, and I’ve got about a day’s worth of experience with them. Of course, first things first: I had to print the BladeRF Coaster by Piranha32. This is especially important because there are a few large tantalum-looking capacitors on the bottom of the board. Before I had it printed, I used some small machine screws and nuts as a quick set of standoffs. The package came with a nice blue USB 3.0 cable and a pair of SMA jumpers; I assume these are meant for their transverter, which I don’t have, but they’re nice to have anyway. There aren’t any instructions in the box, but for a product like this the assumption is that the user knows what they’re getting and how to use it. That’s a reasonable assumption, and I had no problem finding all the relevant documentation.
I’m an ardent Mac user, and this sometimes poses a problem when using specialized hardware and applications. It wasn’t until the last few months that recent versions of GNURadio worked through MacPorts. Luckily it does now, as does all the BladeRF software, and going from nothing to a working software environment took minutes. One thing you have to make sure to do is download the most recent FPGA image, as you’ll have to re-load it every time you power up the BladeRF.
All software defined radios have an Achilles’ heel. If the DC offset of the I and Q baseband signals isn’t minimized, a constant “tone” at zero Hz will be up-converted along with the desired signal to the carrier frequency. Also, if the relative magnitude and phase of I and Q aren’t matched (and precisely 90 degrees apart), there will be an image of the desired signal mirrored across the up-converter frequency. The BladeRF wiki has a short article that shows how to measure the correction factors used to minimize these effects. This article documents my attempt at deriving the correction factors for one of my two boards.
To start, I tuned the board to 450 MHz. Then I decided that a 100 kHz sine would be an appropriate baseband tone. In software, the 100 kHz tone is generated, and the hardware up-converts it to 450.1 MHz. The spectrum below is the result of that test without any corrections applied. The spur resulting from the DC error is only about 25 dB down from the desired carrier, and the image is closer to 35 dB down. That’s not really that great.
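To make the two impairments concrete, here’s a small numpy sketch of what’s going on at baseband. The impairment magnitudes are invented for illustration (they’re not measurements from my board, and this isn’t the BladeRF signal chain), but it shows how a DC offset becomes a zero-Hz spur and how a gain/phase mismatch on Q becomes a mirror image of the tone:

```python
import numpy as np

fs = 1_000_000      # sample rate in Hz (illustrative)
f_tone = 100_000    # baseband test tone, as in my test
n = 4096

t = np.arange(n) / fs
i = np.cos(2 * np.pi * f_tone * t)

# Invented impairments, chosen only to make the spurs visible:
dc_offset = 0.02 + 0.015j       # residual DC on I and Q
gain_imbalance = 1.05           # Q arm 5% hot
phase_error = np.deg2rad(2.0)   # Q arm 2 degrees off quadrature

q = gain_imbalance * np.sin(2 * np.pi * f_tone * t + phase_error)
x = (i + 1j * q) + dc_offset

spectrum = np.abs(np.fft.fftshift(np.fft.fft(x * np.hanning(n)))) / n
freqs = np.fft.fftshift(np.fft.fftfreq(n, 1 / fs))

def level_db(f):
    """Spectrum level (dB) at the bin nearest frequency f."""
    return 20 * np.log10(spectrum[np.argmin(np.abs(freqs - f))])

carrier = level_db(f_tone)
print(f"DC spur: {level_db(0) - carrier:6.1f} dBc")
print(f"image:   {level_db(-f_tone) - carrier:6.1f} dBc")
```

The DC spur sits at 0 Hz regardless of the tone, while the image lands at −100 kHz, mirrored across what becomes the 450 MHz up-converter frequency.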
Playing around with the values according to the instructions, I was able to improve these results quite a bit. The DC error is now 58 dB down from the carrier (ignore the marker table) and the image is almost 63 dB down. This is pretty respectable. But there is another troubling question: what’s up with the spikes in the phase noise?
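For what it’s worth, the correction itself is just arithmetic on the I/Q samples. The BladeRF applies its corrections in hardware via registers, not like this, but here’s a sketch of the idea using the same kind of invented impairment values as above: subtract the DC estimate, then undo the gain and phase skew on the Q arm.

```python
import numpy as np

fs, f_tone, n = 1_000_000, 100_000, 4096
t = np.arange(n) / fs

# Invented impairments: 5% gain error, 2 degrees of skew, some DC.
g, phi, dc = 1.05, np.deg2rad(2.0), 0.02 + 0.015j
x = np.cos(2 * np.pi * f_tone * t) \
    + 1j * g * np.sin(2 * np.pi * f_tone * t + phi) + dc

# Correction: remove DC, then invert the skew on Q.
# q_meas = g*sin(wt + phi) = g*(sin(wt)*cos(phi) + cos(wt)*sin(phi)),
# so sin(wt) = (q_meas/g - i*sin(phi)) / cos(phi).
y = x - dc
q_fixed = (y.imag / g - y.real * np.sin(phi)) / np.cos(phi)
z = y.real + 1j * q_fixed

def image_dbc(sig):
    """Image level relative to the carrier, in dB."""
    spec = np.abs(np.fft.fftshift(np.fft.fft(sig * np.hanning(n))))
    freqs = np.fft.fftshift(np.fft.fftfreq(n, 1 / fs))
    at = lambda f: spec[np.argmin(np.abs(freqs - f))]
    return 20 * np.log10(at(-f_tone) / at(f_tone))

print(f"image before: {image_dbc(x):6.1f} dBc")
print(f"image after:  {image_dbc(z):6.1f} dBc")
```

In practice you don’t know `g`, `phi`, and `dc` up front, which is exactly why the wiki procedure has you iterate on the correction registers while watching the spurs on a spectrum analyzer.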
I asked on the #bladerf channel on Freenode, and they agreed that it’s not normal. The spikes are exactly 7.8 kHz apart, so I would start by looking for things that happen at that frequency. I wish I had measured how far the largest one was from the carrier and from zero. That could have been good to know.
Oh, wait, I did :). The main spur is 63 kHz away from the carrier and 37 kHz away from zero. There was some speculation that the spikes could be artifacts from the quantization of the software-generated sine wave, or that the tuning algorithm could be at fault. For fun, I ran the same test without generating any sine wave at all, so all you’re seeing is a large zero-Hz tone from a constant DC offset. I do think the quantization explanation makes sense: it would manifest as phase noise, and the spikes are centered about the modulated tone rather than the zero spur.
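As a sanity check on the quantization theory, here’s a toy numpy sketch showing why quantization can produce discrete spurs rather than a smooth noise floor. When the tone period divides the sample rate evenly, the quantization error is itself periodic, so all of its energy piles up on a handful of exact frequencies. The bit depth here is exaggerated down to 6 bits just to make the spurs obvious; this says nothing about where the 7.8 kHz spacing on my board actually comes from.

```python
import numpy as np

fs, f_tone, n, bits = 1_000_000, 100_000, 10_000, 6
t = np.arange(n) / fs
x = np.exp(2j * np.pi * f_tone * t)   # ideal complex tone

# Quantize I and Q to `bits` bits (coarse on purpose, to exaggerate).
scale = 2 ** (bits - 1) - 1
xq = (np.round(x.real * scale) + 1j * np.round(x.imag * scale)) / scale

spec = np.abs(np.fft.fft(xq)) / n
carrier_bin = f_tone * n // fs        # tone lands exactly on bin 1000

# The quantization error repeats every fs/f_tone = 10 samples, so all of
# its energy lands on exact multiples of 100 kHz: discrete spurs, not noise.
spurs = sorted(np.argsort(spec)[::-1][:5])
print("largest bins:", spurs)
```

With a ratio of sample rate to tone frequency that isn’t a nice integer, the error energy smears out instead, which is part of why these artifacts can be subtle in real setups.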
If there were any kind of problem with the board’s power supplies, for example, the same artifacts should still appear in this DC-only plot. Instead it looks very clean, and I’m happy with the phase noise performance of the board. You’re probably seeing the phase noise of the Rigol in this plot rather than that of the BladeRF.
All in all, so far, I’m very happy with these. I’m very excited about their potential.