Category Archives: Linux

Cross Compiling Rust (Notes)

I’ve been fiddling with cross compiling Rust to our ARM-64 platforms. I’m hitting a wall because Rust ELF executables require the system loader and C libraries. I’m writing these notes to document what I’ve found so far; I’d like to come back and work out how to build a static ARM-64 executable. Started here: https://github.com/japaric/rust-cross

Step 1. Install Rust. https://www.rust-lang.org/tools/install

Step 2. Install a C cross compiler. Rust uses the C toolchain’s linker.

% sudo apt install gcc-aarch64-linux-gnu 
or 
% sudo dnf install gcc-aarch64-linux-gnu

Step 3. Add the cross compiler targets to Rust.

rustup target add aarch64-unknown-linux-gnu
rustup target add aarch64-unknown-linux-musl

Step 4. Add the cross linkers to the Cargo config.

…marvin:hello-rust% cat ~/.cargo/config
[target.aarch64-unknown-linux-gnu]
linker = "aarch64-linux-gnu-gcc"
[target.aarch64-unknown-linux-musl]
linker = "aarch64-linux-gcc"
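Side note: newer Cargo also reads this file under the name ~/.cargo/config.toml; the same settings in that form (the musl linker name is an assumption — use whatever your musl cross toolchain actually ships):

```toml
# ~/.cargo/config.toml (newer Cargo name for the same file)
[target.aarch64-unknown-linux-gnu]
linker = "aarch64-linux-gnu-gcc"

[target.aarch64-unknown-linux-musl]
# assumption: linker name as shipped by a musl cross toolchain such as Bootlin's
linker = "aarch64-linux-gcc"
```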

Step 5. Create a simple new project.

cargo new --bin hello-rust
cd hello-rust

Step 6. Build

cross build --target aarch64-unknown-linux-gnu

Now, examining the final executable, I find it requires the glibc loader and libraries. We’re using uClibc for historical reasons.

% file ./target/aarch64-unknown-linux-gnu/debug/hello-rust
./target/aarch64-unknown-linux-gnu/debug/hello-rust: ELF 64-bit LSB shared object, ARM aarch64, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux-aarch64.so.1, BuildID[sha1]=f62e92025124bc8018570c700b6e5faeecfdd671, for GNU/Linux 3.7.0, with debug_info, not stripped
% readelf -a ./target/aarch64-unknown-linux-gnu/debug/hello-rust
Program Headers:
Type Offset VirtAddr PhysAddr
FileSiz MemSiz Flags Align
PHDR 0x0000000000000040 0x0000000000000040 0x0000000000000040
0x0000000000000230 0x0000000000000230 R 0x8
INTERP 0x0000000000000270 0x0000000000000270 0x0000000000000270
0x000000000000001b 0x000000000000001b R 0x1
[Requesting program interpreter: /lib/ld-linux-aarch64.so.1]
[snip]
Dynamic section at offset 0x3aac8 contains 30 entries:
Tag Type Name/Value
0x0000000000000001 (NEEDED) Shared library: [libdl.so.2]
0x0000000000000001 (NEEDED) Shared library: [libpthread.so.0]
0x0000000000000001 (NEEDED) Shared library: [libgcc_s.so.1]
0x0000000000000001 (NEEDED) Shared library: [libc.so.6]

I tried the Rust musl cross compiler but hit a wall with compile errors around, I think, hard vs. soft floats. The musl cross compiler I found here: https://toolchains.bootlin.com/releases_aarch64.html

There are notes explaining musl can be statically linked. https://github.com/japaric/rust-cross#how-do-i-compile-a-fully-statically-linked-rust-binaries

I should google those errors. Found:

https://github.com/rust-lang/rust/issues/46651

Looks like it might be fixed but not in a released Rust? Need to investigate further.

Update: Installed the nightly build and voilà! A statically linked musl ELF.

% rustup toolchain install nightly
% rustup default nightly-x86_64-unknown-linux-gnu
% cross build --target aarch64-unknown-linux-musl
% file ./target/aarch64-unknown-linux-musl/debug/hello-rust
./target/aarch64-unknown-linux-musl/debug/hello-rust: ELF 64-bit LSB executable, ARM aarch64, version 1 (SYSV), statically linked, with debug_info, not stripped

Copied the resulting ELF to my firmware. IT WORKS! It’s big, but it works.
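A sketch for shrinking it (untested here, and assumes the toolchains installed in the earlier steps): a release build drops the debug_info that file reported, and the cross strip removes the remaining symbols.

```shell
# assumes the nightly Rust toolchain plus gcc-aarch64-linux-gnu from the steps above
cross build --release --target aarch64-unknown-linux-musl
aarch64-linux-gnu-strip target/aarch64-unknown-linux-musl/release/hello-rust
```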

/var/tmp # ls -l hello-rust
 -rwxr--r--    1 root     root       3878608 Oct 26 12:55 hello-rust*
 /var/tmp # ./hello-rust
 Hello, world!

Command Line JSON

I just stumbled across a wonderful tool: command line JSON validation/pretty-print.

https://docs.python.org/3/library/json.html#module-json.tool

I often work with JSON on our routers. I use curl to read from our API endpoints, which return JSON. Getting back large blobs of JSON is useful, but it’s hard to read all jumbled together.

% curl --basic --user admin:${CP_PASSWORD} http://172.16.22.1/api/status/wlan/radio/0

Command-line output of the router wifi survey API.

Now instead I can pipe to python3 -m json.tool and the JSON will be cleanly formatted and human-readable.
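For example (any JSON on stdin works; the blob here is made up):

```shell
# compact JSON in, pretty-printed JSON out
echo '{"name":"radio0","channel":36}' | python3 -m json.tool
```

It also doubles as a validator: invalid JSON makes it exit non-zero with a parse error.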

autoreconf

I love GNU autoconf. I remember the days of downloading a .zip or a .tar.Z of source and having to manually edit a config.h full of symbols I didn’t understand. GNU autoconf came along, and now we just ./configure && make && make install.

Building Kismet from source, I hit a difficulty with libusb, which was not installed on my Linux laptop (I hadn’t needed it yet). Building Kismet on another machine, with libusb installed, worked fine.

./configure
...
checking for libusb... no
configure: error: Package requirements (libusb-1.0) were not met:

Package 'libusb-1.0', required by 'virtual:world', not found

I was curious about Kismet’s modularity. OK, probably just need to check the configure flags; a properly modular program should let me disable USB.

% ./configure --help
...
  --disable-usb           Disable libUSB support
...

However, “./configure --disable-usb” still gave me the libusb error. Puzzling. I finally noticed that ./configure was reporting a warning right after starting.

% ./configure --disable-usb
configure: WARNING: unrecognized options: --disable-usb
...

Debugging the generated configure script is a pain. Time to dust off my ancient autoconf knowledge. The configure script is generated from configure.ac. I opened that up and searched for disable-usb:

AC_ARG_ENABLE(libusb,
  AS_HELP_STRING([--disable-usb], [Disable libUSB support]),

The help string is there. But why doesn’t it work? Is it something simple?

 AC_ARG_ENABLE(libusb,
- AS_HELP_STRING([--disable-usb], [Disable libUSB support]),
+ AS_HELP_STRING([--disable-libusb], [Disable libUSB support]),

It might be this simple. Now, how do I rebuild the configure script from configure.ac? It’s been a long time, but I remember a magic ‘autoreconf’.

% sudo dnf install autoconf automake
% autoreconf

The autoreconf failed with a complaint about the AC_PYTHON_MODULE macro. (I’ve lost the actual message to the mists of scrollback.) Autoconf is built on m4. A quick Google search for AC_PYTHON_MODULE leads to an m4 macro library, the GNU Autoconf Archive: https://www.gnu.org/software/autoconf-archive/

Download the tarball, ./configure && make && make install, and then try autoreconf again. Works! git diff shows the Kismet configure script updated. Run configure again with --disable-libusb and no complaints.


Continuing to Debug an OpenSSL Build

Continuing to tinker with the OpenSSL problems I’ve been having. I have a chunk of software whose build requires an older version of OpenSSL to be installed, but I want the newest OpenSSL dev package on my machine. We’re not going to get along.

I discovered I could selectively statically link some of the libraries into the program. This feature was completely mind-blowing to me. I started using the GCC linker way, way back and was quite disconcerted by the -lfoo syntax, which would find libfoo.a for linking.

In trying to force this software to build against a local copy of the older OpenSSL, I hit problems with the final program finding the OpenSSL dynamic libraries. LD_LIBRARY_PATH would let me aim the software at my local lib, but I was hoping for something simpler that didn’t require setting that env var every time. Could I statically link the executable? It’s been a while since I tried to statically link a Linux app.

A quick Google found an even better answer:

https://stackoverflow.com/questions/6578484/telling-gcc-directly-to-link-a-library-statically

In the Makefile, I simply had to use -l:filename to link against a specific static library file. With the -L flag, I could point at a specific library location.

LDFLAGS += -lffi -lutil -lz -lm -lpthread -l:libssl.a -l:libcrypto.a -lrt -ldl

I’ve been using the GNU toolchain for many years and had no idea the linker could do this.


Debugging a Linux Hard Lockup

Building the linux-wireless-testing tree described in https://wireless.wiki.kernel.org/en/developers/documentation/git-guide. I have multiple MediaTek 7612 USB cards I’d like to get working. There’s exciting work going on in wireless to bring the MediaTek wifi into the mainline kernel.

I built the kernel on a Shuttle and loaded my new kernel. Plugged in the Alfa AWUS036ACM dongle, loaded up wpa_supplicant and my scripts. The desktop was unresponsive: hard lockup.

Power cycle. I’m on Fedora 28, which runs systemd and thus journald. journalctl -b -1 didn’t show any kernel panic.

Debugging a hard lockup in a kernel module seems easier than one during kernel boot. The netconsole module https://www.kernel.org/doc/Documentation/networking/netconsole.txt will let me see the kernel log messages leading up to the lockup.

I also need to investigate the lockup watchdogs: https://www.kernel.org/doc/Documentation/lockup-watchdogs.txt

Netconsole.

% sudo modprobe netconsole @/enp3s0,5566@172.19.10.254/

[69677.819810] netconsole: unknown parameter '@/enp3s0,5566@172' ignored
[69677.819903] console [netcon0] enabled
[69677.819904] netconsole: network logging started

OK, what did I screw up? Is there a way I can tickle the kernel into logging a message to test my config? Not from userspace, but a Google search found the following useful post. (Oh, StackOverflow and friends, is there nothing you can’t do?)

https://serverfault.com/questions/140354/how-to-add-message-that-will-be-read-with-dmesg/140358#140358

Load/unload the module. Still not seeing any messages. Oh, right, I always forget this step: raise the console loglevel. https://elinux.org/Debugging_by_printing#Netconsole_resources

echo 8 > /proc/sys/kernel/printk

Skipping over the “unknown parameter” problem for now and just setting the parameters manually. Here is what I have:

...trillian:coconut% cd /sys/kernel/config/netconsole/
...trillian:netconsole% ls
arthurdent/
...trillian:netconsole% cd arthurdent/
...trillian:arthurdent% ls
dev_name enabled extended local_ip local_mac local_port remote_ip remote_mac remote_port
...trillian:arthurdent% cat remote_mac
70:88:6b:81:ac:64
...trillian:arthurdent% cat remote_ip
172.16.17.181
...trillian:arthurdent% cat remote_port
5566
...trillian:arthurdent% cat local_ip
172.16.17.92
...trillian:arthurdent% cat local_port
6665
...trillian:arthurdent% cat dev_name
enp0s31f6
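(In hindsight, and as an untested guess based on the netconsole doc linked above: the earlier “unknown parameter” error probably just means the value needs to be passed as the netconsole= option, i.e.:)

```shell
# netconsole=[+][src-port]@[src-ip]/[dev],[tgt-port]@<tgt-ip>/[tgt-macaddr]
sudo modprobe netconsole netconsole=@/enp3s0,5566@172.19.10.254/
```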

So now run netcat on arthurdent, my other machine.

nc -l -u 5566

Can test my netcat from trillian by sending a udp packet:

ls | nc -u 172.16.17.181 5566

And now time to turn the crank and watch the chaos unfold. Start up my wpa_supplicant and wpa_cli script and boom crash.

[ 546.841947] wlp0s20f0u1u2: authenticate with c4:b9:cd:dc:48:40
[ 547.140202] wlp0s20f0u1u2: send auth to c4:b9:cd:dc:48:40 (try 1/3)
[ 547.140226] BUG: unable to handle kernel NULL pointer dereference at 0000000000000011
[ 547.140228] PGD 800000083e025067 P4D 800000083e025067 PUD 83e072067 PMD 0
[ 547.140233] Oops: 0000 [#1] SMP PTI
[ 547.140236] CPU: 2 PID: 2503 Comm: wpa_supplicant Not tainted 4.19.0-rc2-wt #1
[ 547.140238] Hardware name: Shuttle Inc. SZ170/FZ170, BIOS 2.09 08/01/2017
[ 547.140258] RIP: 0010:ieee80211_wake_txqs+0x1e3/0x3d0 [mac80211]
[ 547.140233] Code: 4c 89 fe 4c 89 ef 48 8b 92 b0 02 00 00 e8 45 d6 2e f4 48 8b 3c 24 e8 0c bc 00 f4 48 83 c5 08 48 3b 6c 24 08 74 a0 4c 8b 7d 00 <41> 0f b6 57 11 3b 54 24 18 75 e6 4d 8d a7 28 ff ff ff f0 49 0f ba
[ 547.140263] RSP: 0018:ffff95038ea83ed0 EFLAGS: 00010293
[ 547.140265] RAX: ffff95038419e978 RBX: ffff95038419e000 RCX: ffff950384b18760
[ 547.140267] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff950384b18828
[ 547.140269] RBP: ffff95038419e970 R08: 00039e862fcdb16e R09: ffff95038a9b4230
[ 547.140271] R10: 0000000000000420 R11: ffffb8efc46c78d0 R12: ffff9503798251d0
[ 547.140273] R13: ffff950384b18760 R14: 0000000000000000 R15: 0000000000000000
[ 547.140275] FS: 00007f42ed60cdc0(0000) GS:ffff95038ea80000(0000) knlGS:0000000000000000
[ 547.140277] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 547.140279] CR2: 0000000000000011 CR3: 0000000835b30001 CR4: 00000000003606e0
[ 547.140281] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 547.140283] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 547.140284] Call Trace:

(Crash truncated here.) Posted to the linux-wireless mailing list and am now getting some help.


Debugging an OpenSSL Version Problem.

I’m working in a system using an older version of OpenSSL. The system builds both a cross compiled version of Python and the same version of Python built for the host. Having the same version of Python in both the embedded target and the local host lets me run my Python scripts locally on the exact same version of Python. I can test my Python scripts locally without having to push them to the remote firmware, which is slow and expensive.

Python uses OpenSSL. The embedded Python build used a local repo copy of the 1.0.x series of OpenSSL. Consequently, the host Python build also needs to use the same older 1.0.x series. But the host Python build used the system’s (default) installed dev version of OpenSSL. I would have had to remove the current version of OpenSSL and install the older version, which troubled me.

I built OpenSSL 1.0.x and installed it into a $HOME tree. Then came the hard part: aiming the system’s host Python build at the older version. Seemed simple enough: find where CFLAGS and LDFLAGS were defined, and change them to point at my local OpenSSL.

CFLAGS += -I$(HOME)/include/openssl

LDFLAGS += -L$(HOME)/lib

But it wasn’t working. I knew because the Python link step reported being unable to find the newer OpenSSL symbol names, so the build was still picking up the newer version of the header files.

I needed to debug my changes to the build. Along the way, I found some useful GCC options.

Debugging a header file include problem is straightforward: sabotage the build. I added “#error foo” to the host’s /usr/include/openssl/rsa.h and “#error bar” to the old openssl/rsa.h. Run “make -B Modules/_ssl.o” and see which include file gets hit. (The -B flag forces Make to rebuild the file regardless of timestamp.) The build failed with “error foo”: I was still getting the system header. I needed to see where GCC was finding its header files.

https://stackoverflow.com/questions/344317/where-does-gcc-look-for-c-and-c-header-files

specifically https://stackoverflow.com/a/41718748/39766

The set of paths where the compiler looks for header files can be checked with:

cpp -v

I added the ‘-v’ flag to the build, asking GCC to pass it along to the preprocessor.

gcc -Xpreprocessor -v $(CFLAGS) -c [etcetera]

Output looks like:

#include "..." search starts here:
#include <...> search starts here:
/home/dpoole/include/openssl
...
/usr/lib/gcc/x86_64-linux-gnu/7/include
/usr/local/include
/usr/lib/gcc/x86_64-linux-gnu/7/include-fixed
/usr/include/x86_64-linux-gnu
/usr/include
End of search list.

Oops. The first include should have been /home/dpoole/include, not /home/dpoole/include/openssl. Fixed my CFLAGS, and I’m now building Python against my local copy of OpenSSL.

Python crashes on startup but progress!

Download your Yahoo Email, Unix Edition

First Off.

I am providing this information because I found it useful. I was inspired by Iain Thompson of The Register (http://www.theregister.co.uk/Author/2395/) to retrieve and delete my 15+ years of email.

Disclaimer: Computers suck. Also, I could be a complete idiot. I’m just some rando blog on the internets. Do the following at your own risk and please don’t hurt me if something goes wrong.

Prepare Your Account

https://login.yahoo.com/account/security

Disable Two-Step Verification

Enable “Allow apps that use less secure sign in”.


Remember to turn these settings back once the email download is done. Then delete your Yahoo account anyway.

Requirements.

You need fetchmail and maildrop installed. I’m an Ubuntu user, so I just did:

sudo apt install fetchmail maildrop


fetchmail

~/.fetchmailrc

poll pop.mail.yahoo.com
   service 995
   protocol POP3
   user "myemail@yahoo.com"
   ssl
   password "mypassword"
   fetchall
   mda "/usr/bin/maildrop"

Verify with ‘fetchmail -v -c’ (verbose, check only). You should see a successful login to your Yahoo email.

NOTE: fetchmail’s default behavior over POP3 is to *DELETE* your email once retrieved. I’ve left this behavior in place. If that’s not what you want, consult the fetchmail man page.
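If deleting isn’t what you want, fetchmail’s keep option leaves retrieved mail on the server; a fragment for the poll entry in ~/.fetchmailrc:

```
poll pop.mail.yahoo.com
   ...
   keep
```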

Maildrop

/etc/droprc

Enable DEFAULT="$HOME/Maildir" to deliver mail straight into my home Maildir instead of the system mail spool.

~/.mailfilter

Not needed.

Build a Maildir

From home directory, run

maildirmake.maildrop Maildir

This creates the necessary Maildir tree. Use Maildir rather than mbox format because writing to a single file (mbox) is risky; it could be corrupted in the event of a crash. Maildir writes every email to a separate file.

Test Test Test!

Run fetchmail -v -B 1

This verbosely fetches just ONE message, to test the configuration. You should see a login and one message downloaded. (deep-thought is my hostname.)

Here’s me after downloading three messages:

…deep-thought:~% find Maildir

Maildir
Maildir/cur
Maildir/new
Maildir/new/1476108117.M788587P6060V0000000000000801I0000000002F42B11_0.deep-thought,S=34486
Maildir/new/1476107958.M23066P5120V0000000000000801I0000000002F42B10_0.deep-thought,S=49456
Maildir/new/1476107905.M287749P5102V0000000000000801I0000000002F42B0F_0.deep-thought,S=25386
Maildir/tmp


Release the Hounds.

Let’s go for it. Run

fetchmail -v | tee log.txt

The | tee log.txt saves a log of the run in case anything goes wrong.