m68k reminiscences

I had done the original Web port of UADE back in 2014 without digging much into its existing implementation. Though I had owned an Amiga back in the nineties, I had never actually written much software on that machine. And though I had done some m68k assembly programming on the Atari ST, I cannot say that I still remembered much about the respective Motorola 680x0 CPUs.

But it bothered me that UADE's emulation still doesn't support certain Eagleplayers – after all these years (i.e. it is incapable of playing certain Amiga music files). So I decided that it might be a fun exercise to fix at least some of those flaws myself. "If you want something done right, do it yourself"… right? 😉

The linked page shows my resulting enhanced webUADE version: compared to the original, this version has an added "audio.device" implementation, added multi-tasking support and added support for shared library loading (in addition to various small fixes). It supports (at least) these additional players: Andrew Parton, Ashley Hogg, PlayAY (except ZXAYEMUL), Digital Sound Creations, FaceTheMusic, G&T Game Systems, Janne Salmijarvi Optimizer, Kim Christensen, Mosh Packer, Music-X Driver, Nick Pelling Packer, TimeTracker Titanics Packer, UFO and some Custom modules.

While this little "project" was a fun "code review" exercise (trying to make sense of UADE's original m68k ASM and C based emulator implementation), with some reverse engineering (disassembling portions of the Amiga's Kickstart OS) thrown in, it was also a stark reminder of what it meant to program back in the day..


software archeology/puzzle..

This was the first time that I tried my luck at reverse engineering a Windows *.exe with the goal of porting the respective functionality to the Web. Admittedly it was a rather pointless (who needs another old music player in 2022?) but fun undertaking that gave me a pretext to play with some new tools and refresh my code review skills. The hobby project had one clear win condition: the result would either work perfectly or the project would end in humiliating defeat..

To come to the point: the project was a success that can be tried out here: www.wothke.ch/webixs

But let's take a step back, shall we? "IXSPlayer Version 1.20" was originally created by the now-defunct "Shortcut Software Development BV" about 20 years ago.

The player belongs to the "Impulse Tracker" family, but what sets it apart is the way it generates the audio sample data it uses. At the time it must have looked like a promising idea to save the limited Internet bandwidth by using the smallest music files possible.. and via compression and audio synthesis this player gets by with ridiculously small song files that are only a few thousand bytes long (see the status information in the player widget). For comparison: mp3 files are typically several million bytes (megabytes) long.

As we now know, Internet bandwidth was about to evolve quite dramatically, and only a few years later people couldn't care less how many megabytes some silly TikTok or YouTube video might be wasting. So unfortunately the idea of micro music files never gained much traction.

Music files for this player can be found on the Internet (see modland.com). In the modland.com collection the respective files are listed under "Ixalance", so I am also using that name here.
The only *.ixs format player that Shortcut Software ever seems to have released to the public is a small Win32 demo executable:

This demo player obviously only works on Windows. It only plays one song at a time (in an endless loop). And there is a flaw in its generated cache-file naming which may cause songs to load the wrong files and then not play correctly.

I had gotten in touch with the original developers to check whether they might provide me with the source code of their player (so that I could adapt it for use on the Web – like I had already done for various other music formats in my playMOD hobby project). But unfortunately those program sources seem to have been lost over the years. The above Windows *.exe was indeed the only thing left, so that is what I used as the base for my reverse engineering.

Greetings go to Rogier, Maarten, Jurjen and Patrick who had created the original player at "Shortcut Software Development BV". Thank you for letting me use this reverse engineered version of your code.

Non-software engineers can safely stop reading here 🙂 All others might find useful information for their future reverse engineering projects below.

Stage 1

The original developers had told me that the player was an "ImpulseTracker" based design and that it had been written mostly in C++. This sounded promising, since C/C++ can be cross compiled quite easily to JavaScript/WebAssembly using Emscripten. I therefore set out to find a decompiler that might allow me to transform the x86 machine code from the *.exe back to its "original" C++ form. To make a long story short: if you want to do this kind of thing professionally you should probably buy IDAPro – as a hobbyist like me you can try your luck with Ghidra and the free demo of IDAPro as a supplement (other tools like boomerang, cutter, exe2c, retdec, etc. seem to be a waste of time).

What to expect?

A tool like Ghidra has a sound understanding of the different calling conventions used by typical C/C++ compilers (__fastcall, __stdcall, __thiscall). Based on the stack manipulations performed in the machine code, this allows it to correctly identify the signatures of the used C/C++ functions 95% of the time (Ghidra struggles when FPU related functions are involved, but that can be fixed by manual intervention, i.e. by overriding the automatically detected function signatures). Obviously such tools also know most of the instruction set of the used CPU/FPU, which in this case allowed Ghidra to translate most of the x86 gibberish back into a more human readable C form:

With no knowledge of the used data structures, that code is still quite far from the C code that it will eventually turn out to be:

The decompiled code will often be a low-level representation of what the machine code does – rather than of what the original C abstraction might have been. E.g. though technically correct, the below code:

in the original C program would probably rather have read:

Similarly the “1:1” mapping of the optimized machine code:

must still be manually transformed to its “original” form:

Most importantly, in order to get meaningful decompiler output it is indispensable to find out what the used data structures are.

But before diving into that jigsaw puzzle it makes sense to narrow down the workspace: in my case I knew that I was looking for the IXS music player logic – while most of the executable was probably made up of uninteresting/standard VisualStudio/MFC code (some of which Ghidra was able to identify automatically). String constants compiled into the code then allowed me to identify/sort out additional 3rd party APIs. (The relative position of stuff within an executable is useful to get an idea of what belongs together.)

After a tedious sorting/tagging process, the result was a set of functions that *probably* belong to the IXS library that I am looking for – and that I could now export as a (still incomplete) "C program" at the click of a button (since Ghidra does not seem to be suited for an iterative process, there was no point in doing that just yet).

Time to identify data structures

I had been tempted to presume that virtual function tables should be one aspect of C++ code that can easily be identified in the machine code – but it seems I was mistaken. Even the additional "OOAnalyzer" tool – which seemed promising at first – mostly discovered useless CMenu (etc.) classes, but none of the stuff that I was looking for (its extremely slow running "expert system" approach seems to be incapable of reliably searching the memory for arrays that point to existing functions.. something that I had to do manually as a fallback).

At this point IDAPro's debugger also comes in handy: when dealing with virtual function calls, a simple breakpoint quickly eliminates any doubt as to where that call might be going (this is quite crucial when dealing with Ghidra's flawed stack calculations whenever virtual function calls are involved).

The fact that I knew that the code was probably "Impulse Tracker based" obviously helped: when seeing an "alloc" of 557 bytes, the chance of it not being an "ITInstrument" is just very slim (luckily there is a specification of the respective "Impulse Tracker" file format). Memory allocation/initialization code per se is a good place to look for the data structures used in a program.

At this point you may not know what the variables mean, but you already know what types they are – and the offsets used to access data start to make sense.

Once the modelling of the data structures and the "list of interesting functions" is reasonably complete, it is time to switch gears and enter the next development stage (it will be inevitable to come back to Ghidra from time to time to clarify open issues, and it helps if variable/function names preserve some kind of postfix that allows matching them to the original decompiler output during the later development stages). But not before I mention some Ghidra pitfalls:

Ghidra pitfalls

Most of the time Ghidra works pretty well and I would not have been able to do this project without it. But there are instances where the decompiler fails miserably (I am no Ghidra expert and there might be "power user" workarounds that I am just not aware of):

In some instances you can't get around looking at the filthy x86 opcodes one by one (something I had hoped to avoid) to figure out manually what some piece of code actually should be doing:

The above shows the original x86 code on the left and Ghidra's decompiler output on the right. The code uses the LOGE2 and LOG2E constants and the "f2xm1" and "fscale" operations – which are known to be used in implementations of C's math pow(). But what seems to be plausible code at first glance is just total garbage – since Ghidra completely ignores the "fyl2x" operation, which is actually quite important here.

Another weak point are arrays used as local variables in some function. Ghidra may turn what was originally a 100-byte array into a local "int" variable and then happily use those 4 bytes as an "arraybuffer", poking array accesses into random memory locations. (As a workaround it helps to manually override the function's stackframe definition. Even though this has problems of its own, like Ghidra introducing additional shadow vars for some of the data that is already explicitly defined in the stackframe.)

In general, Ghidra's calculations with regard to pointers into a function's stack frame (i.e. its "local vars") leave a lot to be desired (let's say function A – among other local variables – has an array, and it wants to pass a pointer to that array to a function B that it is calling). Here the array address calculated by Ghidra is often just wrong. (Again IDAPro's debugger comes in as a life saver to figure out what those offsets really should be. It seems safe to presume that IDAPro is the far superior tool in this regard. But I guess you get what you pay for..)

Ghidra seems to be out of its depth when stuff is "simultaneously" processed on the CPU and on the FPU – the decompiled code may then perform operations out of order, which obviously leads to unusable results.

Similarly Ghidra seems to completely ignore the calling conventions declared in virtual function tables. Consequently all its stack position calculations may be completely incorrect after a virtual function call.

Finally Ghidra’s logic seems to go totally bananas when a function allocates aligned stack memory via __alloca_probe.

Stage 2

The "C program" exported in stage 1 is now ready to become an actually functioning program. At this point I obviously want to make as few changes as possible to the respective code, so as not to add additional bugs to the problems that undoubtedly are already present in the exported code. Also there isn't any point in starting to clean up yet, since there is a high risk that additional "program exports" may still be needed, which would then require time consuming code merging.

So the first goal is to get the original multi-threaded MM-driver based player to work – like in the original player, just without the UI. That thing had originally been built using Microsaft's C compiler and libs? Then that's exactly what I'll be using. And this tactic actually worked well: still slightly flawed at first, but I got the exported code to actually produce audio for the first time.

Since I am aiming for a single-threaded Web environment, the multi-threading and Microsaft specific APIs have to go next: standard POSIX APIs are available on Linux and they will be available in the final Emscripten environment as well. A simple single-threaded Linux command line player that just writes the audio to a file is therefore the next logical step.

The code now works fine on Windows as well as in the Linux version. I am confident that the exported code has sufficiently stabilized, and it is time for a cleanup (until now everything was still in one big *.h and one big *.c file – as exported by Ghidra). This is the moment where you want an IDE with decent refactoring support. Since I am an old fan of IntelliJ, I decided to try the trial version of CLion for the occasion. And though I found the "smart" auto-completion features of their editor rather annoying, the refactoring worked well (and the UI crap can be turned off somewhere in the settings).

Stage 3

But will it also work on the Web? Obviously it won't! Intel processors are very lenient with regard to their memory alignment requirements, i.e. an Intel processor does not care what memory address it reads a 4-byte "int" from. And this is the platform that the exported program had originally been designed for. The processors of many other manufacturers are more restrictive and require a respective address to be "aligned", i.e. the address of a 4-byte "int" must then be divisible by 4. The Emscripten environment that I am targeting here shares this requirement. All relevant memory access operations must consequently be cleaned up accordingly – once that cleanup is done the code actually runs correctly in a Web browser.

Feeding the IXS player output to something that actually plays audio in a browser (see WebAudio) requires additional JavaScript glue code: I already have the respective infrastructure from earlier hobby projects, and this part is therefore largely copy/paste that I will not elaborate on here.

But one extra bit of work is still needed: the IXS player generates the data for its instruments whenever a song is first loaded, and that operation is somewhat slow/expensive – and blocking a browser tab for several seconds just isn't polite. One solution that "modern" browsers propose is the Worker API, which allows doing stuff asynchronously in a separate background thread. This means that the original program must be split into two parts that then run completely independently and only talk to each other via asynchronous messaging. Finally there is the browser's "DB feature" that allows to persistently cache the results of the expensive calculation, so that it doesn't even need to be repeated the next time the same song is played. So that's what I do: the Worker asynchronously fills the DB with the respective data if necessary, and the Player pulls the data from the DB and triggers the Worker when needed. Bingo! (All that remains to be done now is for the Google Chrome clowns to fix their browser and make sure that it isn't their DB that blocks the bloody browser.. it just isn't polite!)

we’ve got fan spin..

It seems that the small cooling fan that came with my Raspberry Pi 4B is much louder than anything that I've ever had in my PC. Adding some lithium grease to the fan's dry bearing had somewhat improved the situation (especially for the longevity of the fan) but it was still way too loud – at times it seemed to be louder than my SID chip music contraption, which had been the reason for getting a RPi4 to begin with (see https://www.youtube.com/watch?v=bE6nSTT_038 ).

I therefore decided to add some PWM fan control to reduce the noise. Respective instructions can be found here: https://www.instructables.com/PWM-Regulated-Fan-Based-on-CPU-Temperature-for-Ras/ and the wiring was quickly done:

But where to connect the respective control wire? There are not that many pins available on the Raspberry's GPIO connector to begin with (many of the pins are redundant GND or 5V pins or reserved for some Raspberry internal purpose), such that less than half of the 58 GPIOs can actually be used. Worse still, all of the available pins are from the low-range (i.e. 0-31) GPIOs and are controlled via the same shared control registers. But since I am already using that low range for the timing critical logic in my SID chip music contraption, the last thing I need is some CPU fan also messing with those same registers. I therefore started to look for *other* GPIOs that could be used instead.

Luckily there are lots of built-in Raspberry features that I don't need (e.g. Bluetooth, audio output jack, camera, DSI display control, etc.), so many of the "internally used" GPIOs are not actually used on my machine and might potentially be salvaged for something else. Unfortunately few of the potentially unused lines are exposed such that they could be patched into easily, and the miniaturization used on the Raspberry's PCB is obviously not very soldering friendly. Also many of the unused components seem to be using some I2C bus (the RPi4 has 8 built-in I2C controllers). Unfortunately there seems to be little publicly accessible documentation regarding which devices actually share the same bus controller, so the I2C pins of some irrelevant device may well share a bus with some crucial device like the PMIC. Salvaging pins here may be risky..

Fortunately I found a low hanging fruit elsewhere: in addition to the main block of GPIOs, the RPi4 also seems to have a "GPIO expansion" that hosts 8 additional GPIOs used by the firmware (the expansion also seems to be accessed via I2C). One of these GPIOs is CAM_GPIO, i.e. a pin used to turn a potentially connected camera on/off. I'll never connect a camera to my RPi4, so this is the GPIO that I've been looking for. Even more conveniently, the signal can be patched into easily, since the camera connector has nice big (for a RPi4) pads to solder to. Works perfectly 🙂 (PS: I blacklisted the camera driver just in case..)

In hindsight: with the temperature and fan speed control in place, the result is actually somewhat sobering. It seems that a barely turning fan is totally sufficient to keep the SoC temperature of my RPi4 below 50°C. And I still have to find some computationally expensive "killer application" that will force the fan to actually rev up. It seems that the unregulated fan had been turning much too fast for no good reason. Worse, it may well be that a decent passive cooler (instead of the silly RPi badge – see photo above) would cover the actual cooling needs just as well, without even using a fan.

Be that as it may.. it was a fun exercise and in case some additional GPIO is needed I now know where to find it 🙂

PS: The C program that I have written to control the fan speed can be found here: https://bitbucket.org/wothke/websid/src/master/raspi/cpufan/

Raspberry Pi 4 setup notes

I have recently bought a Raspberry Pi 4 (RPi4) and wasted more time setting it up than I had planned for. These are just some notes for my own use, in the hope that they might be useful the next time I need to touch the device (and as a bonus they might be useful to other people as well).

For context: I am using the device to compile "userland" and kernel module code (C++/C) directly on the device (i.e. compilation should not take longer than absolutely necessary, and when the device crashes it should restart as quickly as possible). I am using the device mostly "headless", accessing it via SSH and Samba, but I have also installed a desktop that I can activate/remotely access via VNC-Viewer when needed. I don't care about the form factor, and some additional SSD dangling on a cable doesn't bother me a bit.


  • Create a backup image of your "hard disk" (or SD card) as soon as you have your correctly customized system! Some of the setup steps take a long time and you DO NOT want to repeat those when your disk gets corrupted or when you need to roll back some "update" that does not work. Note: I've had to restore the Raspberry boot disk more often in a single week than I had to reinstall Windows in the past 10 years! It seems that the boot SD card (or SSD) may get corrupted quite randomly – leaving your device in an un-bootable state! At that point it is crucial to have an easily restorable backup. Test that your backup actually works (before you need it)! I'd recommend having two "identical" drives so that you can always test the backup using the redundant drive. Use whatever backup software you like, but *IMMEDIATELY* test whether the backup that you made actually works (see the "Create a backup.." section below)!


  • Before assembling the case of your RPi4, take off the sticker at the back of the cheap Chinese CPU fan that came with it. Put a drop of lithium grease on the fan's axle before you put the sticker back. (Otherwise the fan will soon have startup problems, make screeching noises, and maybe have undesirable effects on your supply voltage – at ~200mA that fan is probably one of the more power hungry attachments to your RPi4, and it might draw even more while not spinning freely.)
  • Working with a class-10 micro SD card is unbearably slow. Using an SSD (even a slow one) via USB instead makes a huge difference. From an SSD the desktop runs OKish (even with only 1GB RAM) – whereas it was totally unusable when booting from the SD card. So instead of wasting money on a micro SD card that you might end up not using, go directly for an SSD. I've read reports that SSDs sometimes seem to cause "low voltage" issues, but the two SSDs that I tried so far both drew between 0.06-0.10A, which should be handled easily by the RPi4.
  • In spite of your intention of using the device "headless", you'll find yourself directly plugging in the display more often than you care to remember! (Whenever the device fails to boot or to connect to the network, there is just no other way than to plug in a display to see what is going on.) Believe me, fiddling with the DVI connector in the back of your display to toggle back and forth between PC/Raspberry gets annoying very quickly. So check if your display already allows connecting two devices at the same time, or, while shopping for that ridiculous micro-HDMI cable, make sure to also get a respective DVI/HDMI switch. (In principle the same applies to the USB cables used for keyboard and mouse.)
  • Install a regular "desktop" distribution of the Raspberry OS: a "lite" version might save you a few GB initially (who cares, since the SSD probably has more space than you'll ever use) and you'll probably end up wasting extra time manually installing that desktop stuff later anyway (e.g. when you need the "SD Card Copier"). Also, on the desktop the setup of the initial WiFi connection can be done easily at the click of a button, rather than having to dive into the respective config files.
  • As a first step you should directly connect a display to check that a newly installed device starts up properly (until you have a correctly functioning network setup). The device starts with the typical Linux boot messages (screens full of them). So if the screen stays blank, first check that you used the leftmost HDMI connector on the Raspberry. If that is plugged in correctly, then there is probably a configuration issue (see "/boot/config.txt") that causes the display to not recognize the device or vice versa (the Raspberry will turn off video output completely when it does not detect the display!). Plug your SD card (or SSD) into some PC and you can edit the "config.txt" without using the Raspberry: hard-code a resolution that you know your display can handle – if that doesn't help, start uncommenting the various "HDMI" related entries (e.g. start with "hdmi_force_hotplug=1").
  • If you ever get stuck on the desktop login screen, press "CTRL-ALT-F1" to get a regular terminal window ("CTRL-ALT-F7" will take you back): stupidly, "raspi-config" allows selecting "boot into desktop" even if the respective X11 stuff isn't installed on the device – but it will not automatically install the required missing components; i.e. the desktop login screen may show even when the desktop is NOT available!
  • If for some reason the main boot sequence gets stuck, it may be useful to add the following text (including the leading space) at the end of the existing line in "cmdline.txt" (on the boot partition): " init=/bin/bash". This causes the Raspberry to boot directly into a bash shell, thus giving access to the device.
  • For console users: add alias names for commonly used commands in .bashrc (e.g. alias ll='ls -la') – useful when the same lengthy commands are repeatedly needed.

Network setup

Run "ifconfig" to find out the MAC addresses of the network adapters (eth0 and wlan0), in case you need that information to configure your LAN's router / DHCP server.

When booting into the desktop, you may just use the respective widget in the status bar to directly connect to some WiFi access point (this has the advantage that you'd immediately notice if your WiFi access point had crashed).

Otherwise, first connect via LAN cable and edit the "eth0" related entry in /etc/network/interfaces (e.g. give it an IP and gateway address that works for your LAN). Run "raspi-config" to activate your WiFi and configure its SSID – if any (note: the "raspi-config" UI does not show the SSID even though it actually is saved). Run "ifconfig" to verify that your wlan0 adapter has an IP address. There may be more to it than this if you have specific security requirements – which I don't have.

Use SSH and VNC

First run "raspi-config" and activate "SSH" and "VNC" (if you want to use the desktop remotely) in the respective "interfacing" section. (You can easily test if SSH works by opening a respective local connection directly on the Raspberry in some terminal.) If Win10 then is not able to establish a respective connection, it is likely a Windows problem: "Putty" (SSH client) and "VNC-Viewer" both just DO NOT work when started with regular user permissions. But when started with "Administrator" rights those issues just disappear! What a POS! (Also the respective apps may need to be given rights in whatever firewall is used..)

Install the OS on an SSD

When I had first installed the standard desktop version image of the Raspberry OS on a Samsung 32GB class 10 SD card, it took about 3 minutes to boot to the login screen. And that screen then was so unresponsive that it took 10 secs just to change from the name-input to the password-input field. Similarly Chromium was so slow as to make it utterly unusable.
Booting that same image from a very slow SSD (one that I still had left and which I connected with some cheap USB adapter cable – with no additional power supply) takes less than a minute, and booting into a shell is obviously much faster. (I did meanwhile experience various disk corruptions, and I don't know whether those are due to my bad quality SSD, due to some power supply issue, or something else.)

My fairly recent OS distro might have been capable of booting from SSD out of the box (I did not check), so the first step should have been to directly write the downloaded Raspberry OS image to an SSD and check if the Raspberry boots from it (without even bothering with an SD card).

Unfortunately I first started with an SD card and later followed the below steps, which may or may not have been necessary (this updated my Linux kernel from "5.4.83-v7l" to "5.10.25-v7l+"..). These steps might be needed in case the bootloader in the RPi4's EEPROM is too old.

sudo apt update
sudo apt full-upgrade
sudo rpi-update
sudo rpi-eeprom-update -d -a
sudo raspi-config

[select the "latest" boot ROM in "boot options" in "raspi-config"; also select there whether, when both an SD card and a USB disk are plugged in, to try booting from USB first – or whatever preference you might have]

Use the “SD card copier” to clone the SD card onto the SSD (or try some other tool if you are feeling lucky).


Create a backup of your “SD card” (or SSD)

If you can connect two drives at the same time, then the "SD Card Copier" from the Pi Desktop's "Accessory" menu (you'll not be able to use this with the "lite" OS) might be useful to directly clone a disk ("sudo apt install piclone" to install the tool if necessary).

I like to have my backups as "image" files on my PC, and I initially tried to use "Win32DiskImager" to create / restore respective images – but for some reason the restored images usually failed to boot (e.g. during boot there might be countless error messages like "EXT4-fs error… -ext4_find_entry:1536:inode#…: unable to read itable block.", or a restored image did not boot at all). I then tried to create an image using https://github.com/framps/raspiBackup – but that created a corrupt gzipped image file that did not even set up the partitions when restored via "Win32DiskImager".

The last tool I tried was the free version of "Macrium Reflect", and the respective image could actually be restored to a different disk without any problems, so I guess I'll be using that for now. (I prefer to create a backup while a disk is offline, i.e. while no Raspberry OS is using it.)


Add desktop to the lite OS version

(see https://taillieu.info/index.php/internet-of-things/raspberrypi/389-raspbian-lite-with-rpd-lxde-xfce-mate-i3-openbox-x11-gui )

sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade

sudo apt-get install --no-install-recommends xserver-xorg
sudo apt-get install --no-install-recommends xinit
sudo apt-get install raspberrypi-ui-mods
sudo apt-get install --no-install-recommends raspberrypi-ui-mods lxsession


Share folders via Samba

I am copying files between my Win10 PC and the Raspberry via folders that are shared on the Raspberry. Samba seemed like a good enough solution for my needs, and I am always using the same "pi" user:

sudo apt-get update
sudo apt-get install samba samba-common smbclient

sudo smbpasswd -a pi

In case of config changes the respective server will need to be restarted:

sudo service smbd restart
sudo service nmbd restart

Ditch or rename the existing /etc/samba/smb.conf and replace it with something like this (adapt to whatever folders you want to share):

[global]
workgroup = WORKGROUP
security = user
force user = pi
encrypt passwords = yes
client min protocol = SMB2
client max protocol = SMB3

[SambaTest1]
comment = Samba-Test1
path = /home/public
read only = no

[SambaTest2]
comment = Samba-Test2
path = /home/pi
read only = no
public = yes
writeable = yes
browsable = yes
create mask = 0777
directory mask = 0777
With the above configuration Win10 can access respective folders. Use “Map network drive..” by right-clicking on “My PC” in Explorer and enter the path, e.g. \\SambaTest1
Check the "Connect using different credentials" checkbox so that you can enter "pi" as the user name. If the connection fails, it is likely the doing of your firewall software. Open whatever firewall you are using and check the connections that it has recently blocked. If you see the IP of your Raspberry, make sure to educate the firewall that this is a trusted device.


Build kernel from source

This is a point that may not be relevant for everybody. But since I want to build my own kernel module (aka device driver), I need the kernel sources matching the kernel that is actually used. Since I don't want to cross compile on some other device, I am using this approach here directly on the Raspberry (see https://www.stephenwagner.com/2020/03/17/how-to-compile-linux-kernel-raspberry-pi-4-raspbian/ ):

sudo apt install raspberrypi-kernel-headers build-essential bc git wget bison flex libssl-dev make libncurses-dev

sudo apt update
sudo apt install git

mkdir kernel
cd kernel

git clone --depth=1 https://github.com/raspberrypi/linux

cd linux
sudo make bcm2711_defconfig

sudo make menuconfig

sudo make -j4 zImage modules dtbs
sudo make modules_install

sudo cp arch/arm/boot/dts/*.dtb /boot/

sudo cp arch/arm/boot/dts/overlays/*.dtb* /boot/overlays/
sudo cp arch/arm/boot/dts/overlays/README /boot/overlays/
sudo cp arch/arm/boot/zImage /boot/kernel-RT.img

add in “/boot/config.txt” (so that the custom kernel image is used at boot):

kernel=kernel-RT.img


digital waveguide based audio synthesis of a piano


After my past retro “SID chip” audio synthesis experiments I thought it might be interesting to try out what more modern audio synthesis approaches have to offer. It isn’t a Steinway grand piano yet, but unlike a Steinway it can be turned into a bell tower with the turn of a knob 🙂

The implementation is based on Balázs Bank’s thesis: “Physics-Based Sound Synthesis of the Piano” (see http://home.mit.bme.hu/~bank/thesis/pianomod.pdf) and the various papers that are cross referenced in that document. I highly recommend reading Bank’s thesis since it gives a much broader overview of the subject matter than the more specialized research papers usually do.
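Bank’s full piano model is way beyond a blog post, but the basic digital waveguide idea – a delay line whose output is fed back through a lossy lowpass filter – fits in a few lines. Here is a minimal Karplus-Strong style “plucked string” sketch (my own illustration, not code from the thesis or from webPiano; the 0.996 loss factor is an arbitrary choice):

```javascript
// Simplified plucked-string waveguide: a delay line (whose length sets the
// pitch) is filled with a noise burst (the "pluck") and then recirculated
// through a 2-point average (lowpass) scaled by a loss factor, which makes
// the tone decay. Bank's piano model extends this basic building block with
// a hammer model, coupled strings, a soundboard filter etc.
function pluckString(freq, seconds, sampleRate = 44100) {
  const delayLen = Math.round(sampleRate / freq);   // loop length sets the pitch
  const delay = new Float32Array(delayLen);
  for (let i = 0; i < delayLen; i++) delay[i] = Math.random() * 2 - 1; // noise burst
  const out = new Float32Array(Math.floor(seconds * sampleRate));
  let pos = 0;
  for (let i = 0; i < out.length; i++) {
    const next = (pos + 1) % delayLen;
    // 2-point average acts as the loss/lowpass filter that makes the string decay
    const filtered = 0.996 * 0.5 * (delay[pos] + delay[next]);
    out[i] = delay[pos];
    delay[pos] = filtered;
    pos = next;
  }
  return out;
}
```

The returned Float32Array can be fed directly into a WebAudio AudioBuffer for playback. Turning the loss factor towards 1.0 and detuning coupled delay lines is exactly the kind of knob-twiddling that gets you from “string” towards “bell”.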

I have to admit that my math proficiency is somewhat rusty and I seem to have forgotten much of what I had once learned more than 20 years ago. In addition, much of the audio signal processing theory is simply new to me. Consequently some of the terminology used in the respective papers was totally alien to me (some of it still is) and it sometimes felt like reading a Chinese text automatically translated by Google. I recommend Julius O. Smith’s page (the “PHYSICAL AUDIO SIGNAL PROCESSING” section in particular), which provides a ton of background information useful in this context: https://ccrma.stanford.edu/~jos/Welcome.html

My “webPiano” page should work in any browser that supports WebAudio. However the UI was done for desktop computers and the layout will probably not work well on small smartphone displays.

Joomla in hindsight

It’s been some years since I played a bit with Joomla (at the time version 2.5 was “the thing”). Since it was only kind of a learning experience and the resulting pages were of a hobby project nature, I did not spend any money on extensions but used what was available at the time in some “free to use” version. Still I ended up using about half a dozen 3rd party “extensions” (basic “booking” functionality, etc.), some of which I had to customize significantly to make them cover my requirements. But the result did what I wanted and it probably took less time than if I had programmed everything from scratch myself.

One “doesn’t look a gift horse in the mouth”.. but some of the “Joomla/extensions” code was obviously subpar, and their authors apparently didn’t even know how to properly overload methods in sub-classes, nor did they know the difference between static and non-static methods, etc. .. but I guess you get what you pay for.

I cannot say that I was thrilled by Joomla’s approach to composing pages either.. writing some “article” and putting it into some “menu” structure is easy enough. But you also have to define separate “modules” elsewhere (e.g. for the JavaScript files that a specific page might need) and then attach those via tedious/slow admin-GUI “checkbox clicking” to some “menu item” or some placeholder from the site’s template, etc.. Alas, whenever I came back after some months to make just some minor adjustment it always took excessive amounts of time just to remember how those things were actually connected (since much of the stuff lives in the DB it doesn’t help to do a quick text search on the file system to look for something).

Green banana software

Joomla is actively “improved” and there is what looks like a continuous stream of new releases. A look at the respective https://developer.joomla.org/security-centre.html shows that these releases are not always an improvement: some severe security flaws were actually absent in older versions and then introduced in some “cool new update” – like some “Severity: High” bugs (CVE-2019-10946 which affected versions 3.2.0 through 3.9.4, CVE-2019-9713 which affected versions 3.8.0 through 3.9.3, etc).

Some other software flaws actually go unnoticed for years before they are eventually caught by the Joomla developers, e.g. the “Severity: High” CVE-2017-9933 affects 1.7.3 – 3.7.2. (I can almost hear the 2016 sales pitch of the Joomla acolytes: “what, you are still using 2.5.9? you must upgrade to 3.5 immediately! older versions are such a security risk!”.. haha, very funny..)

The depressing thing though is that even for the officially supported/current Joomla versions there are no separate security patches. Instead the official advice always is to update to the next version – which supposedly fixes the problem. This means that you cannot get the 3 files that fixed a specific bug separately; instead you may get a 10MB zip file that introduces a ton of other changes at the same time (the only exception seem to be the EOL fixes here: https://docs.joomla.org/Security_hotfixes_for_Joomla_EOL_versions/de ).

Joomla’s versioning and updating policy is weird.. or rather disturbing. Some kind of automated updating support is available in the admin GUI – except that it is “somewhat limited”: My first Joomla instance had been using 2.5.4 and the second one 2.5.9. Interestingly, the admin GUI tells me that 2.5.4 should be upgraded to 2.5.5, while my 2.5.9 instance happily tells me that no automatic upgrades are possible. So even if I had ever wanted to update to the last 2.5 version (which I think would have been 2.5.29), even what should be a minor sub-version update seems to be something too risky to perform automatically.. seriously?

Regarding security

As indicated above, the software quality of the Joomla core is not that great and
there are tons of more or less severe issues that “pop up” in the various versions. I’d say it is prudent to not expect much with regards to Joomla security and select potential projects accordingly.

From the beginning it is probably a good idea to restrict access as far as possible, e.g. by activating the web server’s basic HTTP authentication for the “/administrator” GUI functionalities.

A Joomla instance isn’t suitable for a “never touch a running system” approach and most sites will probably be trapped in the “update to the very latest version” hamster wheel, thereby volunteering as beta testers for whatever green bananas Joomla wants to field test. (You’d better not use any 3rd party “extension modules” unless you are absolutely confident that the respective provider will still be around tomorrow to get you an updated version for the next Joomla release – or else you’ll end up rewriting those portions of your site.)

Personally I chose the different approach of just back-merging the code changes for the “Severity: High” Joomla fixes into my old code base, thus avoiding having to find replacements for the long-gone “extensions” that still work in my old version. (This is of course an absolute no-go and I am most certainly a risk for the Internet and maybe for world peace as well…)

Green banana software meets planned obsolescence

It never fails to amaze me how PHP could ever grow such a large following: The poor design decisions taken in early “versions” are so obvious that even newer versions (thankfully) start to reverse them (see “backward incompatible changes”).

But hey, everybody has the right to design a crappy programming language and then learn from his mistakes. The problem with this crappy language is that it comes with an expiration date: “Each release branch of PHP is fully supported for two years from its initial stable release“.

Like a light-bulb that wants to be replaced after 1000h of use. Only here it works even better.. no need to be broken, let’s replace it every 24 months. Add some “backward incompatible changes” and you have a printing machine for money/extra work.
So many wasted opportunities.. just imagine “ANSI C is end of life and all the old programs must be ported to Java8 by the end of the month!”.. splendid, why did nobody think of that one earlier?

So it happens that my hoster informed me that “he will no longer be hosting PHP5 by the end of the month and would I please migrate everything to PHP7”. But of course! I had no plans for the weekend anyway, f*** you very much!

Obviously Joomla 2.5 could not know about PHP7 yet and the Joomla support doesn’t want anybody to use those old very dangerous legacy versions anyway. (Support in the Joomla universe means: getting help when migration to the new version went south.)

Spoiler: In spite of the “Joomla support” propaganda – old Joomla 2.5 (with the manually added security patches) can be “easily” ported to PHP7.

  1. Search for “->$” to find the indirect variable usage pattern: change “$a->$c[$b]” to “$a->{$c[$b]}” in order to preserve the original semantics in PHP7 (PHP7’s “uniform variable syntax” would otherwise evaluate it left-to-right, i.e. as “($a->$c)[$b]”).
  2. Search for “$key = key($this->_observers)”, which no longer works since PHP7’s foreach loops no longer advance the internal pointer of the array (track the key manually instead, e.g. by adding “$key++;” within the foreach loops).
  3. Replace preg_replace calls that rely on the removed /e modifier with respective preg_replace_callback based implementations.

After this the Joomla instance will start again and you can go after the deprecation warnings (etc.) if you want to clean up properly.

I do not recommend using an old version – or any Joomla version for that matter (you saw the Joomla security issue tracker)! but if you are desperate..

PlayMOD online chiptune music player

With by now 21 different JavaScript/WebAssembly based music emulators in my toolbox it seemed like a logical thing to do..

PlayMOD combines all of my emulators within one UI to provide online browsing and music playback for some of the largest “legacy computer music” collections available on the Internet: The modland.com collection contains about 450’000 music files from various legacy home computers and game consoles, and the vgmrips.net collection adds another 35’000 primarily arcade system songs. The PlayMOD web page does not host any of the music files but directly refers to the data from the respective collections (i.e. the page will only be usable while the respective ‘modland’ and ‘vgmrips’ servers are available).

There are hundreds of different legacy music file formats involved and the available emulators currently can play more than 98% of them. This avoids having to manually find and install a suitable player for each exotic format (which otherwise may be a tedious task).

The available music files document the evolution of computer music during the past 40+ years. Having everything consolidated in one place allows to easily compare the capabilities of respective legacy sound systems (e.g. by comparing how the compositions of the same composer sounded on different platforms) or to just indulge in reminiscences.

The name of the project was chosen to reflect the fact that it originally played the “modland” collection. It may be somewhat misleading since the term “mod” usually refers to the subset of “computer music” that is created via some kind of “tracker software” (an approach that became popular at the time of the Amiga home computer and which is still in use today). However, in addition to actual mod-files the used collections also provide a large number of other music formats, e.g. much of the older stuff would usually be referred to as “chiptune music” today. You may use the Basics tab directly on the PlayMOD page for more background information.


The PlayMOD user interface is based on the design/codebase originally created by JCH for his DeepSID. I wasn’t keen on creating a UI from scratch so I am glad that I could reuse some of the already existing stuff.

Obviously, legacy computer music could also be preserved by just creating recordings from the original hardware, and as can be seen on youtube, many people already use that approach. Indeed an emulation will usually not be as accurate as a real HW recording (unless lossy encoding like mp3 on low quality settings is involved). Certainly, recordings typically use up much more storage and consequently network bandwidth, but that isn’t the issue that it might have been 10 years ago.

However, from a “legacy computer music preservation” perspective the emulation approach has the benefit that it not only preserves the end result but also the steps taken to achieve it. Also it allows for interactions that would not be possible with a simple recording.

Example: The “Scope” tab in the below screenshot shows the output of the different channels that some “Farbrausch V2” song internally uses to create its stereo output, i.e. an emulation approach allows to look at the “magic” that is happening behind the scenes.


Similarly a respective emulation can still be tweaked, e.g. by turning certain features off, or by using different chip models.


sprinkler WiFi update..

Starting with a little retrospection: So far I had built two sprinkler controllers that I remotely control via a home-grown (Java) PC software (see my earlier sprinkler posts).

These devices are based on a cheap ATmega128 micro controller which requires a separate breakout board and a bit of soldering to get going. With some custom PCB this would actually be a neat chip to use – with more IO pins than you’ll probably ever use. But with the breakout board it uses quite a lot of space and it gets very annoying when one of them burns out (e.g. due to some surge on the power grid) and you quickly need to come up with a replacement.

The devices then use a Si4432 transceiver to communicate with the PC application. The Si4432 theoretically should have an impressive “best case” range and is fairly inexpensive too. As always, solid obstacles (like walls) are the limiting factor for the real range of this transceiver: In my case the obstacles are two tiled roofs and I am ending up with a maximum range of around 50m-70m.

However I soon found, that the Si4432 is pretty fragile (i.e. these things just keep dying for no obvious reason) and most of the really cheap PCBs use 1.27mm pin spacing which makes it a pain in the ass to replace them or to even hook them up to your 2.54mm prototyping boards. I had therefore opted for a somewhat more expensive version with 2.54mm pin spacing. Unfortunately there seem to be different builds (I already saw three) of very similar looking (blue) PCBs, and to make things worse some of these are just incompatible. So when I wanted to just add one more device to my existing zoo I very annoyingly ended up replacing all existing transceiver PCBs because the new version that was sold at that time no longer worked with the older models I already had. Also transmission speed of the Si4432 is not that great (to some degree that may be my fault since I traded in some speed in favour of a more reliable connection).

New approach

Based on the above experiences I felt that I should try something new for the 3rd device that I was about to build (something to be used in my greenhouse). I decided to give WiFi a chance after all and replace the ATmega128 & Si4432 with a “Wemos D1 mini Pro” (ESP8266EX based). The idea is that it should be much easier to get some standard WiFi range extender if necessary, and the transmission speed should be much higher in any case. The respective board costs around five bucks (with external antenna & cable) so it won’t hurt if ever there is the need to replace it.

Obviously I will keep the old devices that I already have, and I therefore decided to extend my existing software such that it is also capable of dealing with WiFi/UDP based communication. I just completed the respective software changes and everything seems to work like a charm (and blazingly fast):
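For illustration only – this is not the actual protocol my controller speaks, and the opcodes are made up – the kind of framing that suffices for such UDP valve commands is tiny: an opcode, a zone id, and a simple checksum so that corrupted packets can be rejected:

```javascript
// Hypothetical 3-byte UDP payload for sprinkler commands:
// [opcode, zone, checksum] with an XOR checksum over the payload.
const CMD_VALVE_ON = 0x01;   // made-up opcode values
const CMD_VALVE_OFF = 0x02;

function encodeCommand(opcode, zone) {
  return Buffer.from([opcode, zone, opcode ^ zone]); // append XOR checksum
}

function decodeCommand(buf) {
  // reject anything that is the wrong size or fails the checksum
  if (buf.length !== 3 || (buf[0] ^ buf[1]) !== buf[2]) return null;
  return { opcode: buf[0], zone: buf[1] };
}
```

On the PC side such a buffer would simply be handed to a UDP socket (e.g. node’s dgram or Java’s DatagramSocket), and the microcontroller would verify the checksum before switching a relay – UDP itself gives no delivery guarantee, so a malformed or lost packet just gets re-sent.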


Meanwhile everything is nicely packed into the device case and ready to go 🙂


For now just some ESP8266/Wemos D1 related observations. In principle, migrating existing Arduino code to the ESP8266 is pretty straightforward: Most of the existing libraries that I had been using previously also work on the ESP8266 (I only had an issue with one specific EEPROM library – which I ended up not using).

But there are significant differences to the “normal” microprocessors I had been using previously:

  • WiFi handling (i.e. the built-in behind-the-scenes functionality) is the ESP8266’s first priority.. Your application code IS NOT, and it MUST NOT get in the way or the device will just crash (there mustn’t be any long-running sections in your code.. “yield()” generously – this is the annoying part: having to litter your code with ESP8266-specific yield() calls that have absolutely nothing to do with your application logic).
  • The memory model is somewhat more complex than you might initially realize (see http://cholla.mmto.org/esp8266/where.html). Code like interrupt handlers must be in the “correct” area (i.e. RAM) or the device will unexpectedly crash.
  • Power supply is crucial: An insufficient power supply – e.g. via a weak USB connection – may be the cause of unexpected crashes.
  • Lastly, the built-in hardware watchdog will also crash the device if not “fed” regularly.

All of the above may lead to “unpredictable” system behavior and tracking down the actual root cause of a problem can be quite an annoying task. Using a tool like https://github.com/littleyoda/EspStackTraceDecoder may then be your best chance to at least get some idea where the problem might come from. (The reference here may also be useful.)

To make things worse I found myself having problems with the IDE I am using (Sloeber 3.0) that may or may not be ESP8266 specific: It may be a general problem of hobbyist IDEs (or Eclipse in particular) but the IDE regularly DID NOT build/upload the correct code. And the ESP8266 oftentimes does not restart properly after a “reset” triggered by the IDE (which may be linked to inadequate USB power supply). I ended up chasing phantom problems that disappeared after a “clean build” and un-plugging of the device. Obviously it is rather annoying when you cannot depend on the correct functioning of your tools!

Regarding the Wemos D1 (ESP8266) hardware: IO pins are scarce and many do have built-in limitations (e.g. they are used to control how the device boots, etc). It seems to be a good idea to use I2C wherever possible to preserve the little that is available. (This may become a pressing issue in case you also intend to use SPI – which may be a prerequisite to hook up things like an SD-card reader.)


I had not been concerned with the topic of licensing while I was still in the midst of trying to get the basic functionality to work. (Most people probably aren’t while dealing with their Arduino toy projects.) Therefore I had not paid much attention to the licensing conditions of the Arduino libraries that I had been using either.

For the most part the respective libraries use the LGPL, and that license allows you to use a library in pretty much any way you like. BUT…

… there also are libraries that use the GPL – which may/should be considered a problem: I for one do not write blank checks, nor do I like the idea of having 3rd parties exploit my work financially without me getting a single dime of the proceeds. As a matter of principle the GPL therefore is an absolute NO-GO for me.

Luckily, in the context of Arduino libraries, the functionality is usually so small that you can easily write a replacement yourself – before subjecting your own code to a shitty GPL license you are probably better off writing your own library.

I dumped two respective libraries and replaced them with my own (see https://github.com/wothke/justASK and https://github.com/wothke/TinyLCD). So if ever I release my code it can be CC BY-NC-SA by default, and in case anybody wants to make money with it he can get a commercial license.


Playing with WebAssembly

I recently noticed that ’emscripten’ meanwhile also allows generating WebAssembly output. WebAssembly is touted (see e.g. https://hacks.mozilla.org/2017/03/why-webassembly-is-faster-than-asm-js/) to use less space and to run more efficiently (i.e. faster) than previously existing web technologies.. sounds great!

With my portfolio of various ’emscripten’ compiled chiptune music players this seemed like the perfect opportunity to just give it a try. (If you want to try this yourself, make sure to really get the latest ’emscripten’ version! Also be warned that the new ‘clang’ version that ’emscripten’ is using now is more strict with regards to existing C/C++ standards and you may need to fix some of your old bugs in the process.)

Due to the fact that web browsers load the *.wasm WebAssembly files asynchronously, existing bootstrapping logic may need to be reworked (you’ll need to wait for the loaded ’emscripten’ module to signal that it is actually ready – don’t even think about using the SINGLE_FILE hack, it won’t work in Chrome!).
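The gist of the rework is a small helper along these lines (a sketch only: “Module” stands for the object the ’emscripten’-generated JS exports, its onRuntimeInitialized hook is emscripten’s documented “ready” notification, and calledRun is the flag it sets once the runtime has already started):

```javascript
// Run "onReady" only once the emscripten module may actually be called into.
// If the runtime is already up (module.calledRun), fire immediately;
// otherwise hook into onRuntimeInitialized, chaining any existing handler.
function whenModuleReady(module, onReady) {
  if (module.calledRun) {
    onReady();                                  // already initialized
    return;
  }
  const previous = module.onRuntimeInitialized; // chain any existing hook
  module.onRuntimeInitialized = () => {
    if (previous) previous();
    onReady();
  };
}
```

So instead of starting playback right after the script tag has loaded, the player now does something like whenModuleReady(Module, () => startPlayback()).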

In the case of my chiptune players, migration fortunately wasn’t a big deal (my player was already prepared to deal with asynchronously loaded stuff) and soon I had the first *.wasm results. And from a size perspective, those output files already were good news: In their old asm.js incarnations some of my emulators are rather bulky and in total the size of the nine emulators originally summed up to more than 11MB. The better optimizer used in the new ’emscripten’ already managed to bring those asm.js versions down to about 10MB – but with *.wasm that now shrinks to 5MB. Nice!

I then went about measuring the performance of the different versions (I tested using Chrome 64 and Firefox 57 on a somewhat older 64-bit Win10 machine). I was using my all-in-one “Chiptune Blaster” page as a testbed (see https://www.wothke.ch/blaster/ and https://www.wothke.ch/blasterWASM/). I patched the music player “driver” to measure the time actually spent within the various emulators while they are generating music sample output. I started measuring after each emulator had already returned some sample data (i.e. its program code had already been used) and then measured the CPU-time that it took to generate 10 seconds worth of sample data. The numbers in the below table are “CPU ms / sec of music output data”, i.e. smaller is better:


I repeated my measurements multiple times (6x) and even though the results were – for the most part – reproducible, they fluctuated considerably (e.g. +/-10%). Any single-digit percentage difference is therefore to be taken with a pinch of salt. In Chrome there were even some massive hiccups (maybe some background garbage collection? see “(*) worst times” in parenthesis). The above table shows the “best” result that I ever observed for the respective scenarios.
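The patched “driver” measurement boils down to something like this (a simplified sketch – the real driver obviously feeds the samples to WebAudio instead of throwing them away, and emu.computeSamples() stands in for whatever call the respective emulator actually exposes):

```javascript
// Measure "CPU ms per second of generated audio" for one emulator.
// "emu" is any object with a computeSamples(n) method (an assumed name).
function msPerAudioSecond(emu, seconds = 10, sampleRate = 44100, block = 4096) {
  let produced = 0;
  const t0 = Date.now();                 // performance.now() in a browser
  while (produced < seconds * sampleRate) {
    emu.computeSamples(block);           // generate one block of samples
    produced += block;
  }
  return (Date.now() - t0) / seconds;    // smaller is better
}
```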

Interestingly, with regards to the “better performance” claim, the results are not really conclusive yet. There are some findings though:

  • Chrome users may typically experience a massive performance improvement from WASM.
  • Firefox’s asm.js implementation already performs much better than Chrome’s. For Chrome users, WASM is actually only the 2nd best choice – for most scenarios the performance benefit of switching to Firefox is even bigger.
  • For Firefox users the situation is more complicated. It really depends on the specific program: Some may run massively faster, but others may actually run slower than their asm.js equivalents!

PS: I had only briefly looked at Edge, but its asm.js performance is slightly worse than Chrome’s and its WASM is almost 2x slower than Chrome’s.

An important thing that I did not mention yet is startup times: WebAssembly is designed to be parsed more easily than the respective JS code, and the asynchronous loading may speed things up further (in case your browser really puts those multiple CPUs to good use..).

And indeed this is where Chrome (and even Edge) actually shines: For the old asm.js version of my page it takes about 3 seconds for Chrome (4 seconds for Edge) to locally load/display it on my PC. For the new WASM version it’s barely more than 1 second (also for Edge)! Firefox somewhat disappoints here: It also improves on the 4 seconds for the old asm.js page, but the new WASM version still takes 2 seconds to load/display (Chrome/WASM may not be too bad after all).

  • So WebAssembly may not always improve execution speed, but combined with the greatly improved startup time it is really nice!

sprinkler update..

This is just a little update regarding my earlier DIY sprinkler project: The old Rainbird product (see left photo below) had finally died completely and the time had come to put in “version 1.0” of my home-grown replacement (see photo on the right).


As compared to the original post I have meanwhile replaced the ATmega328P (i.e. the Arduino ProMini) with an ATmega128: This microcontroller gives me more space for the program code – but it still is a very cheap IC. However it comes at the price of some extra soldering (see green PCB on the photo above). Anyone interested in using this chip as a replacement for less powerful Arduinos should have a look here: https://github.com/MCUdude/MegaCore

Previously I had been performing all of my testing with brand new 12V DC solenoid valves. The question was whether or not my new 12V DC controller would also work for the old 24V AC valves already in place from the old installation:

To my great relief it works perfectly and I can finally control the things remotely from my PC 🙂