C Cross Compiler on Ubuntu Linux

Sources

Introduction

The cross compiler will use the System V ABI (Application Binary Interface). An ABI defines how machine language programs interface with each other, in this case the kernel and the applications that run on it. You can also write libraries for your operating system and compile applications against them; the libraries and the applications using them have to be able to talk to each other. Among other things, the ABI defines how parameters to a function are put into registers and onto the stack. If that interface is defined, two programs adhering to it can talk to each other.
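
To make the idea concrete, here is a small hedged example of what the 32-bit System V i386 calling convention (cdecl) pins down for a plain C function. The function is made up for illustration:

/* Under the System V i386 ABI, the caller pushes the arguments onto the
   stack from right to left and the callee returns its result in EAX.
   Any two binaries that agree on this contract can call each other. */
int add(int a, int b)      /* could just as well live in a library */
{
    return a + b;          /* the result ends up in EAX */
}

int main(void)
{
    /* compiles to roughly: push 2; push 1; call add; add esp, 8 */
    return add(1, 2);
}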

Switching to a certain version of GCC via the alternatives system

I compiled the sources of gcc-4.9.2 on Ubuntu 19.04 after installing gcc-6, switching the alternatives to it as outlined in https://askubuntu.com/questions/26498/how-to-choose-the-default-gcc-and-g-version, and applying the patch from https://gcc.gnu.org/git/?p=gcc.git;a=commitdiff;h=ec1cc0263f156f70693a62cf17b254a0029f4852

setterm -linewrap off

sudo apt-get install gcc-6 g++-6
sudo apt install libc6-dev

dpkg -l | grep gcc | awk '{print $2}'

sudo update-alternatives --remove-all gcc
sudo update-alternatives --remove-all g++

sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-6 10
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-6 10
sudo update-alternatives --install /usr/bin/cc cc /usr/bin/gcc 30
sudo update-alternatives --set cc /usr/bin/gcc
sudo update-alternatives --install /usr/bin/c++ c++ /usr/bin/g++ 30
sudo update-alternatives --set c++ /usr/bin/g++
sudo update-alternatives --config gcc
sudo update-alternatives --config g++

export CC=/usr/bin/gcc
export LD=/usr/bin/ld


You have to apply this patch: https://gcc.gnu.org/git/?p=gcc.git;a=commitdiff;h=ec1cc0263f156f70693a62cf17b254a0029f4852

Installing the prerequisites

According to https://packages.ubuntu.com/search?keywords=libcloog-isl-dev the libcloog-isl-dev package is not part of the Ubuntu 19.04 Disco Dingo release, and it is not advised to install packages from older releases. CLooG is optional anyway, so it is not used in this explanation.

sudo apt-get update
sudo apt-get install build-essential flex bison libgmp3-dev libmpc-dev libmpfr-dev texinfo

Install and build:

####################################
echo Stage 1 - Building Dependencies
####################################

setterm -linewrap off

# make a working directory
cd $HOME/dev
rm -rf cross
mkdir cross
cd cross

# install or update all apt-get dependencies
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install gcc                   # not cross
sudo apt-get install g++
sudo apt-get install make
sudo apt-get install bison
sudo apt-get install flex
sudo apt-get install gawk
sudo apt-get install libgmp3-dev
sudo apt-get install libmpfr-dev libmpfr-doc 
#sudo apt-get install libmpfr4 libmpfr4-dbg
#sudo apt-get install mpc                  # the Ubuntu package named mpc is a music player client; libmpc-dev below provides the MPC library
sudo apt-get install libmpc-dev
sudo apt-get install texinfo               # optional
#sudo apt-get install libcloog-isl-dev      # optional
sudo apt-get install build-essential
sudo apt-get install libc6-dev             # glibc headers; glibc-devel is the Red Hat package name
sudo apt-get -y install gcc-multilib libc6-i386

# download and unpack necessary files
wget http://ftpmirror.gnu.org/binutils/binutils-2.25.1.tar.gz
wget http://ftpmirror.gnu.org/gcc/gcc-5.3.0/gcc-5.3.0.tar.gz
wget http://ftpmirror.gnu.org/gcc/gcc-4.9.2/gcc-4.9.2.tar.gz
wget http://ftpmirror.gnu.org/gcc/gcc-4.9.0/gcc-4.9.0.tar.gz
wget http://ftpmirror.gnu.org/gcc/gcc-4.8.3/gcc-4.8.3.tar.gz
wget http://ftpmirror.gnu.org/mpc/mpc-1.0.3.tar.gz

# unzip all archives
#for f in *.tar*; do tar zvxf $f; done

rm -rf binutils-2.25.1
tar zvxf binutils-2.25.1.tar.gz

rm -rf gcc-4.8.3
tar zvxf gcc-4.8.3.tar.gz

rm -rf gcc-4.9.2
tar zvxf gcc-4.9.2.tar.gz

# create installation directory
cd $HOME/dev/cross
mkdir install
export PREFIX="$HOME/dev/cross/install"
#export TARGET=i686-elf
export TARGET=i386-elf
export PATH="$PREFIX/bin:$PATH"

################################
echo Stage 2 - Building Compiler
################################

## install mpc
#cd $HOME/dev/cross
#mkdir build-mpc
#cd build-mpc
#../mpc-1.0.3/configure --prefix="$PREFIX"
#make -j2
#make -j2 check
#make -j2 install
#cd ..

# install binutils
cd $HOME/dev/cross
rm -rf build-binutils
mkdir build-binutils
cd build-binutils
../binutils-2.25.1/configure --target=$TARGET --prefix="$PREFIX" --with-sysroot --disable-nls --disable-werror
make -j2
make -j2 install
cd ..

# install gcc
cd $HOME/dev/cross
rm -rf build-gcc
mkdir build-gcc
cd build-gcc

#../gcc-4.8.3/configure --target=$TARGET --prefix="$PREFIX" --disable-nls --enable-languages=c,c++ --without-headers --with-mpc="$PREFIX"

#../gcc-4.8.3/configure --target=$TARGET --prefix="$PREFIX" --disable-nls --enable-languages=c,c++ --without-headers



../gcc-4.9.2/configure --target=$TARGET --prefix="$PREFIX" --disable-nls --enable-languages=c,c++ --without-headers

make -j2 all-gcc
make -j2 all-target-libgcc
make -j2 install-gcc
make -j2 install-target-libgcc

Build Errors:

../../mpc-1.0.3/src/mul.c:175:1: error: conflicting types for ‘mpfr_fmma’

This error occurs when mpc-1.0.3 is built against MPFR 4.x, which introduced an official mpfr_fmma function that conflicts with mpc’s internal helper of the same name. Using the system libmpc-dev package instead of building mpc from source avoids it.

cfns.gperf:101:1: error: ‘const char* libc_name_p(const char*, unsigned int)’ redeclared inline with ‘gnu_inline’ attribute

This is exactly the error that the patch linked above fixes; apply it before building gcc.

The cross compilers are in

~/dev/cross/install/bin


Design of the Operating System

Purpose of this Article

As a beginner in operating system development it is difficult to find your way through the material that is out there. There are no definitive guides to implementing an operating system, because most of the authors have taught themselves by reading Intel’s developer manuals. A lot of the books are therefore opinionated. Operating systems become a very personal subject all of a sudden.

In general, free tutorials that teach OS development in a concise manner and go beyond a hello world boot loader are scarce. James Molloy wrote an excellent set of articles that even explain paging and a heap implementation from scratch, line by line! James’s articles explain the implementation of more concepts than any other article on the internet.

The problem is that some other sites on the internet attack his implementation, mentioning problems in James’s code. James also mentions concepts but does not thoroughly explain why he wants to adhere to them. For example, he says kmalloc should not be called after identity mapping but before a heap is initialized. He never says what errors occur should this rule be ignored!

On the other hand you have the heaps of academic books that explain broad concepts in a nicely written and well organized fashion! The problem is that academic material is too high level to implement anything. Implementation details are left out of the picture. A student could leave a course with a broad understanding of what an operating system does, but at the same time that student is not able to implement any of the concepts because the understanding is too high level. In a sense the student knows everything and nothing at all at the same time! The knowledge becomes useless beyond passing an exam. Only the most brilliant minds will be able to transform an academic book into a working operating system. Us mere mortals, we need material that gets us started with a basic implementation so we can learn the steps it takes to write an operating system.

I think the problem is that in order to be able to write an operating system, you have to find a way to dive down from the level of abstraction displayed in academic books to the very low-level tutorials of the web. Articles or books that provide those intermediary steps do not exist. You have to wade through the heaps of opinionated articles and find your own way. You have to establish a plan of what to implement in what order. In order to create an implementation plan, you have to manifest all the concepts in your brain into a concrete architecture, not knowing yet if the architecture will hold up. You should not let yourself get discouraged by opinionated, elitist posts on the internet and keep on working on your own implementation.

This article outlines my personal architecture and ideas of how an operating system could be implemented. It is heavily influenced by James Molloy’s articles and tries to solve problems that might arise from James’s implementation as mentioned on the OSDev wiki.

I have no idea if all that is written in this post is correct or not. If I find myself in a situation that I cannot solve based on my current understanding, I will replace this architecture with a changed one that does not have the problems of its predecessor.

Besides James Molloy’s influence, the architecture is heavily influenced by Unix concepts. Because almost all books on operating systems are based on the Unix operating system, I almost exclusively think in Unix terms when I think about operating systems at this point. Although things like fork before exec do not intuitively make sense to me (why fork? Why not just exec?), I can see that an implementation of fork and exec is possible and leads to new processes. So I will implement this concept, especially since it aligns with James Molloy’s ideas.

Overall Design

The operating system will use paging. It will not swap to a hard drive in the early versions. If all physical memory is used, no process will receive more memory. A process has to be able to deal with this situation. It will use paging to secure the kernel frames from being written to by user mode code and also to facilitate the creation of new processes by copying physical frames for isolation of processes while maintaining the same virtual address space for the copied process. It will also use paging to map the operating system’s frames to the bottom of every running process.

The kernel’s page directory and page tables are stored in identity mapped frames, although I still do not 100% understand why identity mapping is needed. That is not true! The page directory and page tables have to be managed by the heap and they have to belong to the init process. They should be copied (not mapped) on fork(). Because fork() will copy everything above the last used kernel frame, the page directory and page tables have to be located above the last identity mapped kernel frame.

The kernel’s frames are located at the bottom of the physical memory, starting from 0. (Where does GRUB put the kernel? Also, where is the stack placed by GRUB?) Code placed by the BIOS is also contained in the low frames of the memory map (see http://www.cs.bham.ac.uk/~exr/lectures/opsys/10_11/lectures/os-dev.pdf, Figure 3.4). The kernel’s frames and the BIOS’s frames have to be marked as occupied in the bitmap of used frames to save them from being handed out twice. This is how the kernel prevents code from overwriting it in the early stages of the boot process.
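
A minimal sketch of how such a bitmap could mark frames as occupied, one bit per 4 KB frame (the names frame_bitmap and set_frame are illustrative, not taken from a concrete implementation):

extern unsigned int frame_bitmap[];   /* one bit per frame: 1 = used, 0 = free */

/* mark the frame containing the given physical address as occupied */
void set_frame(unsigned int phys_addr)
{
    unsigned int frame = phys_addr / 0x1000;           /* 4 KB frames */
    frame_bitmap[frame / 32] |= (1u << (frame % 32));  /* set the used bit */
}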

Before activating a heap, the kernel will just use static memory (placement memory system). It will start with an address (read from GRUB’s multiboot information) and it will move that pointer (placement_address) up the address space whenever it needs memory. This memory is never returned as long as the OS runs and hence no heap is necessary for this simple placement memory system.
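
A minimal sketch of such a placement allocator, assuming a placement_address variable initialized from the multiboot information (the name follows James Molloy’s tutorial):

extern unsigned int placement_address;   /* starts right after the loaded kernel */

/* bump allocator: hand out memory by moving placement_address upwards;
   this memory is never freed for as long as the OS runs */
unsigned int kmalloc_placement(unsigned int size)
{
    unsigned int addr = placement_address;
    placement_address += size;
    return addr;
}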

Calls to fork() will map (not copy) all lower frames into the newly forked processes memory space. Overall fork will map

  • The frames created by the BIOS
    • contains the interrupt vector table
  • The kernel code installed by the bootloader GRUB
  • The modules that were placed into memory by GRUB above the kernel
  • The kernel’s placement memory created via the placement memory system
    • The placement memory contains the bitmap of used frames

Program Flow and the Init Process

When the OS starts, it eventually creates the init process. (It also creates an entry for the init process in its list of running processes. This list is the array of process descriptor data structures. That list is also updated by fork(). Because the list is maintained in the kernel memory section at the bottom of the memory map, it has to be static in size: once the kernel initializes init’s heap, the list’s memory remains fixed to keep it from growing up the address space into other areas of memory, trashing processes along the way. The kernel can therefore only run a fixed amount of processes. Maximum 16 for the start, maybe?)

The program flow starts after the bootloader (GRUB) has set the CPU’s instruction pointer onto the kernel code that it has placed into memory. The CPU starts to execute the kernel code. The kernel at this early stage has no processes running, it is the only part executing.

The kernel will

  • Set up the GDT and interrupt tables
  • Create its own stack
  • Prepare the array of process descriptors
  • Create the bitmap of frames. All frames are still free at this point.
  • Prepare frames (above the static kernel memory) and put the Page Directory and Page Tables into those frames (the Page Table Entries do not point to frames yet!)
  • Identity map the kernel’s frames and mark them used in the bitmap of frames
  • Assign frames to the Page Table Entries prepared above and mark those frames as used in the bitmap of frames
  • Initialize the heap
  • Start the init process and hand the program flow over to it

Once the init process has received the program flow, it takes care of all further tasks. It will read the hard drive to find a configuration file. The configuration file describes which process init should start. In most cases, the new process will be the console process (user space), which reads from stdin, outputs to stdout and can fork() and exec() new processes via system calls.

At some point the program flow has to go into a scheduler which assigns processing time to all running processes. It will run the init process, the console process and all forked and execed processes in their respective time slices.

Back to the init process that just took over the flow from the kernel. The init process will inherit and use the operating system’s stack and its first Page Directory and Page Tables. In the future, whenever init forks itself (maybe to prepare a call to exec later on), the kernel’s/init’s stack will be copied to the new process by in-memory copying the frames that the kernel stack occupies. The newly copied stack will then be available under the same virtual memory address as the kernel’s/init’s stack (no pointers have to be moved around). The new process has all pointers functioning because its virtual memory looks the same as init’s virtual memory. The frames underneath the copied stack are copies of init’s frames, so the new process cannot affect init’s original stack. James Molloy copies frames, which is fine, but then proceeds to move pointers around; I think this is not necessary. The OSDev wiki also says that it is harmful.

During fork() the new process will not make a copy of the frames that the operating system uses (except the stack, as explained above); it will just map them into the new process’s virtual address space. Mapping means that some of the Page Table Entries will receive a pointer to the kernel’s physical memory. That pointer is copied from the parent process as is, without any changes.
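
In code, mapping could be as small as copying the parent’s Page Table Entries verbatim. A sketch, with the entry type reduced to a plain integer for illustration:

typedef unsigned int pte_t;   /* a Page Table Entry: frame address plus flag bits */

/* share the kernel's frames with the child by duplicating the parent's PTEs */
void map_kernel_frames(pte_t *child_table, const pte_t *parent_table,
                       unsigned int count)
{
    unsigned int i;
    for (i = 0; i < count; i++)
        child_table[i] = parent_table[i];   /* same physical frame, same flags */
}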

Overview of Frames

The operating system uses frames for:

  • The frames into which GRUB loaded the kernel sections.
  • The frames that were allocated by the OS before a heap was activated (= identity mapped frames).
  • The frames used to manage running process information. This is the fixed size array of process descriptor data structures.
  • The frames used by the bitmap of free or occupied physical frames.
  • The frames used by the operating system’s Page Directory and Page Tables (more or less owned by the init process).
  • Its own stack (more or less owned by the init process as well).
  • The frames used by the heap (more or less owned by the init process).

The latter three types of frames are passed over by the operating system to the init process. The reason is that every process should have

  • its own virtual memory address space (Page Directory and Page Tables)
  • its own stack
  • its own heap

By assigning those three objects to the init process, they can be in-memory copied for new, forked processes and they are not merely mapped.

The criterion fork() uses to decide whether to copy or to map is the last frame used by the OS before the heap was activated. Everything below that frame is mapped instead of copied. Everything above that frame is copied (cloned, duplicated). The stack, the heap and the Page Directory and Page Tables have to be placed above the last used OS frame in the memory map so they are copied and not mapped.

Also, frames used by the Page Directory and Page Tables will not be statically allocated by the OS! Instead they should be allocated via the heap. The reason is that a process has to have its own virtual address space and has to be able to grow or shrink its own Page Directory and Page Tables (= virtual address space) via the heap. That means that before the OS adds the first Page Directory and Page Tables for the init process, a heap implementation has to be activated so init immediately starts to behave like a normal process. init has to be a fully functional template for all other processes forked from it.

The reason for mapping the OS frames is that the operating system’s code, static resource data and the memory that it allocated before the heap was active will remain unchanged as long as the OS is running. Because those frames are static, there is no reason to make physical copies of them. How does fork() know which frames to copy and which to map? All frames from 0 to the last frame allocated before the heap was activated are mapped. Those are the frames the operating system uses. All other frames (the stack, the process’s data) are copied.
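
A hedged sketch of that decision, assuming a last_kernel_frame variable that records the last frame allocated before the heap was activated (map_frame and copy_frame are hypothetical helpers):

typedef struct page_directory page_directory_t;                    /* opaque here */
extern unsigned int last_kernel_frame;                             /* last pre-heap frame */
extern void map_frame(page_directory_t *d, unsigned int frame);    /* hypothetical */
extern void copy_frame(page_directory_t *d, unsigned int frame);   /* hypothetical */

/* during fork(): share the static OS frames, duplicate everything else */
void clone_frame(page_directory_t *child, unsigned int frame)
{
    if (frame <= last_kernel_frame)
        map_frame(child, frame);    /* point the child's PTE at the same physical frame */
    else
        copy_frame(child, frame);   /* allocate a fresh frame and copy the contents */
}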

Process Descriptors, List of Process Information

What is stored in a ProcessDescriptor? A ProcessDescriptor is created for each running process. The ProcessDescriptors are stored in the kernel’s static memory, hence their number must be limited to a maximum value (16 for the beginning). Each ProcessDescriptor contains (a C sketch follows the list):

  • The process identifier PID, a numeric value identifying the process amongst all processes. 0 is not a valid PID because fork returns 0 in the child process after fork.
  • A pointer to a physical address. That physical address contains the process’s Page Directory. This pointer is needed to enlarge the memory management structures Page Directory and Page Table if the process requests more memory. Also, the memory management structures have to be in-memory copied during a fork().
  • A pointer to the heap.
  • A pointer to the stack.
  • A place in memory where registers can be stored to preserve the CPU state.
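
A hypothetical layout following the fields above (types and names are illustrative):

typedef struct process_descriptor {
    unsigned int  pid;                  /* > 0; fork() returns 0 in the child */
    unsigned int  page_directory_phys;  /* physical address of the Page Directory */
    void         *heap;                 /* the process's heap */
    void         *stack;                /* the process's stack */
    unsigned int  registers[16];        /* saved CPU state while not running */
} process_descriptor_t;

/* fixed-size table in static kernel memory, hence the hard process limit */
#define MAX_PROCESSES 16
static process_descriptor_t process_table[MAX_PROCESSES];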

Running a Program

The Console

Executing a program starts with the user typing a command in the console/shell. According to the comet book (Operating Systems: Three Easy Pieces), the shell is a normal user mode program.

It will by some means find the executable binary file that implements the command. It calls the kernel function fork() to copy itself. It will then call the kernel function exec() to replace the code segment of the copy with the code segment read from the executable that implements the command. It will then let the newly forked process run and wait for its termination. Finally it will show the command’s return value on the console.
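
As a sketch, this is what that flow looks like with Unix-style system calls (the run helper is made up for the example):

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

/* fork/exec/wait flow of a simple shell */
void run(const char *path, char *const argv[])
{
    pid_t pid = fork();                 /* duplicate the shell */
    if (pid == 0) {
        execv(path, argv);              /* replace the copy with the command */
        _exit(127);                     /* only reached if exec failed */
    }
    int status;
    waitpid(pid, &status, 0);           /* wait for the command to terminate */
    printf("exit code: %d\n", WEXITSTATUS(status));
}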

No console

To make things easier during development, interaction with the operating system is not needed and the console can be left out. Identifying a program binary to execute and calling fork() and exec() can just be done in the kernel_main() method. The console can be left out of the picture during early development.

Under Linux, the first process that is started is the init process. A first process could be started from the kernel_main() method similar to init.

Why fork() and exec()?

The question is: could an application not be started by creating a completely new process, without forking an existing process and without using exec()? The comet book says the approach of fork() and exec() is just the right thing to do. Is that true or not? Could it be done more easily?

James Molloy’s Paging and Heap

Why it is not safe to call kmalloc between identity mapping and heap activation

James Molloy’s tutorials are the best source I can find about implementing paging including a heap.

According to the OSDev wiki the code has some flaws, but there is no other write-up which goes into the same detail as James Molloy’s, so I read his tutorials quite a bit and I take away a lot from them.

I had a hard time understanding why James Molloy states that between identity mapping the frames and activating the heap, calls to his kmalloc() function are prohibited and the placement_address memory pointer should not be moved.

I think I finally figured out why he organizes his heap setup code the way he does and why he does not want kmalloc to be called after identity mapping the frames and before the heap is functional.

The reason is that in his initialise_paging() function, during identity mapping he iterates over all frames from 0x0000 up to the current value of placement_address. The area between 0x0000 and placement_address contains everything that was loaded into memory by GRUB, that means the kernel code and all the data the kernel uses. The area also contains the Page Directory and Page Tables created so far. It also contains the heap’s Page Tables and Page Table Entries. This area should never be overwritten, otherwise the system will crash. To prevent the area from being overwritten, the frames covering it are allocated by calling alloc_frame(). Once a frame is allocated, it is marked as used and will never be handed out to any other program by the heap. Allocating frames from 0x0000 to placement_address is also called the identity mapping loop:

unsigned int i = 0;
while (i < placement_address + 0x1000)
{
    // Kernel code is readable but not writeable from userspace.
    alloc_frame( get_page(i, 1, kernel_directory), 0, 0 );
    i += 0x1000;   // advance one 4 KB page/frame at a time
}

If kmalloc were called at this point, it would move placement_address further and hand out memory that is not secured by an allocated frame. That memory could be overwritten as soon as a frame covering it is handed out to an application by the heap. That is why, after identity mapping, James Molloy does not want kmalloc to be called.

In his tutorials he then changes the code of kmalloc to use the heap once the heap is activated. That means that once the heap is initialized, kmalloc will not use placement_address any more but will use the heap, and it is safe to call kmalloc again.
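
A hedged sketch of that switch, assuming a kheap pointer that stays 0 until the heap is initialized (heap_alloc stands in for the real heap allocator):

extern void *kheap;                        /* 0 until the heap is initialized */
extern unsigned int placement_address;
extern unsigned int heap_alloc(void *heap, unsigned int size);   /* hypothetical */

unsigned int kmalloc(unsigned int size)
{
    if (kheap)                             /* heap is up: use it */
        return heap_alloc(kheap, size);
    unsigned int addr = placement_address; /* otherwise fall back to placement memory */
    placement_address += size;
    return addr;
}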

That is why he says: kmalloc should not be called between identity mapping and the point when the heap is ready.

Why is the heap paging code split in two parts?

If you look at initialise_paging() you can see that pages are created for the heap before identity mapping, and frames are assigned to those pages after identity mapping.

By creating pages for the heap, I mean that a Page Table Entry is requested. Requesting a Page Table Entry can potentially trigger a call to kmalloc, because if a Page Table is full while a new page is requested, a new block of memory has to be allocated to house a new Page Table. This basically means that in order to use paging, there is an overhead of management/meta data, which is the Page Directory, Page Directory Entries, Page Tables and Page Table Entries.

James Molloy first creates Page Tables and Page Table Entries for the heap, without allocating frames for the Page Tables and Page Table Entries. He then identity maps the area from 0x0000 up to placement_address and after that allocates frames for the heap entries.

The reason is that he wants the heap’s page tables and entries to be stored within the area from 0x0000 up to placement_address. He wants that data to be located in the identity mapped frames. He does not assign frames to the heap’s Page Table Entries yet, because if he assigned frames at this point, they would be placed at address 0x0000, since no frames are in use yet.

To understand why the first allocated frame goes to 0x0000, you have to understand how James Molloy decides which frame should be used next. To decide, he maintains a bitmap that covers all frames from 0 to MAX_FRAME in that order and records whether each frame is used or still available. Whenever a frame is needed, the next free frame in that order is used.

Because no frames have ever been used before the heap is created, the first free frame is frame 0x0000. The heap frames should not be located at 0x0000 because 0x0000 has to contain the kernel’s frames so they are identity mapped. That is why the heap frames are only assigned after the identity mapping has used all the frames it needs to cover the kernel. The heap frames are then taken from some other part of the physical memory, but not from the kernel’s identity mapped area.
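
A minimal sketch of that next-free-frame scan over the bitmap (frame_bitmap and nframes are illustrative names):

extern unsigned int frame_bitmap[];   /* one bit per frame: 1 = used, 0 = free */
extern unsigned int nframes;          /* total number of physical frames */

/* return the index of the first free frame, or -1 if physical memory is exhausted */
int first_free_frame(void)
{
    unsigned int i;
    for (i = 0; i < nframes; i++)
        if (!(frame_bitmap[i / 32] & (1u << (i % 32))))
            return (int)i;            /* frame i covers physical address i * 0x1000 */
    return -1;
}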

The Memory Map

A diagram helps to understand the situation. The diagram shows the memory map used by James Molloy. The memory starts from 0x0000 at the bottom and goes up to where placement_address is currently located. The linker script puts the kernel code .text, .bss and all other sections into the memory between 0x0000 and placement_address.

The kernel has the information where the linker put the last section. The kernel will then initialize placement_address to a memory location after the last loaded section. placement_address will then grow upwards whenever kmalloc is called before the heap is active. Without a heap, all kmalloc does is move placement_address upwards so that the caller can use the memory without it being handed out twice to some other program. Once the heap is active, the next free frame in the bitmap of frames is used when someone requests memory, and a free frame could be anywhere in RAM.

What happens during identity mapping can be visualized on the diagram pretty well. Identity mapping basically loops over the memory from 0x0000 up to placement_address, covering that part of RAM with frames along the way. The frames are marked as used. Once the iteration is done, all kernel code, kernel data and all files and modules loaded by GRUB are secured by frames with a set used flag, and no one will overwrite that data because the heap will not hand those frames out to any other process.

Single Board Computers (SBC)

x86


STM32 Todo

  • Learn how to decompile a binary
  • Learn about Ethernet:
    • https://www.carminenoviello.com/2015/08/28/adding-ethernet-connectivity-stm32-nucleo/
    • https://os.mbed.com/cookbook/Ethernet-RJ45
    • https://www.sparkfun.com/products/716
    • https://www.carminenoviello.com/2016/01/22/getting-started-stm32-nucleo-f746zg/
    • https://en.wikipedia.org/wiki/LwIP
    • https://www.st.com/en/evaluation-tools/nucleo-f746zg.html
  • Learn about MBed OS https://www.mbed.com/en/
  • Learn about the MBed online compiler https://os.mbed.com/ then on the top bar, click on compiler
  • Try the STM32CubeIDE https://blog.st.com/stm32cubeide-free-ide/
  • Try to install an Eclipse IDE for STM32
    • https://www.carminenoviello.com/2014/12/28/setting-gcceclipse-toolchain-stm32nucleo-part-1/
    • https://www.carminenoviello.com/2015/01/07/setting-gcceclipse-toolchain-stm32nucleo-part-2/
    • https://www.carminenoviello.com/2015/01/16/setting-gcceclipse-toolchain-stm32nucleo-part-iii/
    • https://www.carminenoviello.com/2015/06/04/stm32-applications-eclipse-gcc-stcube/

STM32 Compile Applications

Compile on Ubuntu Linux

sudo add-apt-repository ppa:team-gcc-arm-embedded/ppa
sudo apt-get update
sudo apt-get install gcc-arm-none-eabi

Test the installation:

arm-none-eabi-gcc --version

Should output something similar to this:

arm-none-eabi-gcc (GNU Tools for Arm Embedded Processors 7-2018-q3-update) 7.3.1 20180622 (release) [ARM/embedded-7-branch revision 261907]
Copyright (C) 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.


cd ~
cd dev
mkdir helloworld
cd helloworld
vi main.c

paste this text:

int main(void)
{
    while (1);
}

Compile the program:

arm-none-eabi-gcc -std=gnu99 -g -O2 -Wall -mlittle-endian -mthumb -mthumb-interwork -mcpu=cortex-m0 -fsingle-precision-constant -Wdouble-promotion --specs=nosys.specs main.c -o main.elf

You will get an .elf file. It is not a .bin file and cannot be flashed directly.
https://stackoverflow.com/questions/49680382/objcopy-elf-to-bin-file

Check the elf file

arm-none-eabi-readelf -h main.elf

Convert the .elf file to .bin (Removes the .elf metadata and only leaves raw machine code for the microcontroller to execute)

arm-none-eabi-objcopy -O binary main.elf main.bin

You can now flash the bin file using st-link.
Connect the board using usb.
Check that the board is connected:

/home/wbi/dev/stlink/build/Release/st-info --probe

Flash the binary onto the board. Warning: flashing a binary onto the board will overwrite the previous content of the board without performing any backup! The previous content is lost and cannot be brought back! If you want to save the content, first read the flash. Reading the flash is described in another article.

st-flash write main.bin 0x08000000

STM32 Reading the flash memory

Why read the flash

Before flashing my own software onto the STM32, I wanted to download and store the preinstalled example program. To download the preinstalled application, it is necessary to read the 512 KB flash memory to a file on disk.

Mac OS

On Mac, the easiest way to read the 512 KB flash is to use the STM32CubeProg application, which is available for Mac, Linux and Windows. It is available from here. Please do not confuse STM32CubeProg (the STM32 Cube Programmer) with the STM32CubeIDE (available here).

On Mac, the only problem is installing STM32CubeProg. Clicking the .app file from the zip does nothing. Instead, the only way to install the application is to use the tip from this page. The post says to execute a java command on the .exe file on Mac, which actually works and installs the application just fine.

sudo java -jar SetupSTM32CubeMX-4.22.0.exe

The STM32CubeProg allows you to select an address to read from and an amount of bytes to read.

The flash has a size of 512 kilobytes. The hex equivalent of 512 KB is 0x80000. Reading 0x80000 bytes from the address 0x08000000 using a Data Width of 32 bit will read the entire flash content.

First, plug in the STM32 board using a USB cable. When starting up, the STM32CubeProg application says it is not connected. Click the connect button. The application will automatically detect your board and set the correct parameters. It will prefill the address field with 0x08000000 and the Size with 0x400. Replace the size with 0x80000 to read 512 KB. Click the Read button and wait until the application refreshes itself. Then toggle the Read button to Save As… by opening the dropdown and selecting Save As… from the options. Then click the button and store the file onto your hard drive.

Linux

On Ubuntu Linux, you can install a USB driver and the st-link application to access the flash of the STM32 Nucleo F446RE.

sudo add-apt-repository ppa:team-gcc-arm-embedded/ppa
sudo apt-get update
sudo apt-get install gcc-arm-none-eabi

Test the installation

arm-none-eabi-gcc --version

Should output something similar to this:

arm-none-eabi-gcc (GNU Tools for Arm Embedded Processors 7-2018-q3-update) 7.3.1 20180622 (release) [ARM/embedded-7-branch revision 261907]
Copyright (C) 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Install the USB driver:

sudo apt-get install libusb-1.0-0-dev

Install st link by checking out the git repository and compiling the source code:

git clone https://github.com/texane/stlink.git
cd stlink
make release
cd build/Release

Test:

/home/wbi/dev/stlink/build/Release/st-flash

/home/wbi/dev/stlink/build/Release/st-info
/home/wbi/dev/stlink/build/Release/st-info --version
/home/wbi/dev/stlink/build/Release/st-info --flash
/home/wbi/dev/stlink/build/Release/st-info --sram
/home/wbi/dev/stlink/build/Release/st-info --descr
/home/wbi/dev/stlink/build/Release/st-info --pagesize
/home/wbi/dev/stlink/build/Release/st-info --chipid
/home/wbi/dev/stlink/build/Release/st-info --serial
/home/wbi/dev/stlink/build/Release/st-info --hla-serial
/home/wbi/dev/stlink/build/Release/st-info --probe

Read the flash

https://github.com/texane/stlink/issues/644
stlinkv2 command line: ./st-flash [--debug] [--reset] [--serial <serial>] [--format <format>] [--flash=<fsize>] {read|write} <path> <addr> <size>
st-flash --format binary read mydump.bin 0x08000000 0x100
/home/wbi/dev/stlink/build/Release/st-flash --format binary read /home/wbi/temp/stmnucleof446RE/mydump.bin 0x08000000 0x200
/home/wbi/dev/stlink/build/Release/st-flash read /home/wbi/temp/stmnucleof446RE/out.bin 0x08000000 0x80000

Designing a Desktop Application with C#

Layers, MVVM and organizing the Code into Projects

According to the PluralSight Course “Getting Started with Dependency Injection in .NET” by Jeremy Clark, a desktop application can be structured in four layers.

  • The View Layer contains the UI elements of the application such as buttons and list boxes.
  • The Presentation (Logic) Layer contains the business logic that drives the application and controls the View Layer.
  • The Data Access Layer contains code that interacts with the data store. A datastore can be a WSDL-WebService, a REST API, a database or any other data provider.
  • The Data Store provides the data.

After defining the four layers, Jeremy goes on to map those four layers to the MVVM pattern. MVVM stands for Model – View – ViewModel.

  • The View Layer maps to the View of the MVVM pattern.
  • The Presentation Logic Layer maps to the ViewModel of the MVVM pattern.
  • The Data Access Layer and the Data Store both map to the Model of the MVVM pattern.

Jeremy uses one separate C# project in the solution per Layer and one project for common code shared amongst the layers. In total the solution contains five projects.

Jeremy’s Data Store Layer project contains an ASP.NET Core project that provides a dummy REST API. I can imagine that there are situations in which the Data Store Layer project does not add any additional benefit to your specific use case, and I would go so far as to say that the Data Store Layer project is optional.

The Data Access Layer

The Data Access Layer has to retrieve data from and send data to the data store. The integration with the data store and the data format used should be of no concern to any of the upper layers. The Data Access Layer shields the upper layers from the details of communication and integration as much as possible.

One way to achieve this separation is for the Data Access Layer to talk to the upper layers via domain objects; it then has the responsibility to convert to and from those domain objects when it talks to the data stores.

The Data Access Layer contains ServiceReader interfaces and classes that implement those interfaces. The interfaces define an API that uses domain objects. Internally, a ServiceReader implementation uses a WebClient to talk to REST APIs, or an SSH or Telnet client to speak those protocols.

It also contains converters. Converters are generic interfaces and implementations whose task is to convert from and to domain objects. They contain and isolate the mapping logic to convert between domain objects and the form of serialization that is used to talk to the data store at hand. A converter can also be unit tested easily. Converters can be nested to deal with complex data structures.

Data is retrieved from the data store via the client and put through the converter to obtain domain objects, and then the domain objects are returned via the ServiceReader’s API. Sending data to a data store also goes through the API, a converter and finally a client.

If JSON is the method of serialization, you can use Newtonsoft.Json. For WSDL webservices use ???. For Telnet use ??? For SSH use ??? For HTTP you should use a modern asynchronous HTTP client (Which one ???)

The Presentation Layer

Contains ViewModel classes (e.g. EmployeeViewModel, PeopleViewModel, …) that implement INotifyPropertyChanged and other interfaces used to connect them to the View Layer.

The ViewModel classes contain properties (= members) that are databound to the View Layer.

The ViewModel classes finally contain ServiceReader interface member variables; the ServiceReader implementations from the Data Access Layer are injected into them.

The View Layer

The view layer contains GUI descriptions in .xaml files. The UI components defined in the .xaml files are backed by window classes. The UI components are bound to properties of their respective window classes.

The data that the window classes provide to the GUI components via their bound properties comes from the Presentation Layer’s ViewModel classes. The ViewModel objects are injected into the window classes.

Docker and ASP .NET Core

How to dockerize a .NET Core application

Install docker on your machine.
Test the installation
$ docker run hello-world
The expected output is: Hello from Docker!

Read https://docs.docker.com/get-started/

The goal is to build an image that you can copy to the target machine and start to create a running container.
The container will contain all applications needed to run your app.

A container is launched by running an image. An image is an executable package that includes everything needed
to run an application — the code, a runtime, libraries, environment variables, and configuration files.

As an analogy: A docker-image is a class, a docker-container is an instance of a class.

First, you have to create a docker image on the development machine.
Docker images are created from docker-files (called Dockerfile without an extension) which contain statements that docker
will execute to create the image.

Go to the folder that contains the solution (.sln) file.
Create a file called ‘Dockerfile’ without extension.
Edit the Dockerfile in a text editor.

Here is my example Dockerfile:

FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build
#FROM mono:6.0.0.313-slim AS build
WORKDIR /app

# debug output
RUN pwd
RUN hostname
RUN uname -r

# install npm
RUN apt-get update && apt-get install -y curl
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash -
RUN apt-get update && apt-get install -y nodejs

# copy csproj and restore as distinct layers
COPY *.sln .
COPY clone_angular/*.csproj ./clone_angular/
RUN dotnet restore

# copy everything else and build app
COPY clone_angular/. ./clone_angular/
WORKDIR /app/clone_angular

RUN npm install
RUN dotnet publish -c Release -o out

FROM mcr.microsoft.com/dotnet/core/aspnet:2.2 AS runtime
WORKDIR /app
COPY --from=build /app/clone_angular/out ./
ENTRYPOINT ["dotnet", "clone_angular.dll"]


Execute docker to build the docker image from the Dockerfile.
Navigate to the folder that contains the docker file.
$ docker build --tag=<TAG_NAME> .

‘docker build’ will run the Dockerfile.
During the execution, the operating system is 4.9.184-linuxkit, so you are actually running Linux and apt-get is available for installing software.

On that linuxkit, there is no software installed.
If your build requires any tools, you have to install them on the linuxkit.
For example, if your application uses Angular, you will need node and npm; you have to
install those tools before building your app.
To install software, prepare a working installation command, then add a RUN command
to the Dockerfile and paste the install command after it.

Once the dockerfile was executed, check if your docker installation lists your new image:
$ docker image ls

You should see

REPOSITORY TAG IMAGE ID CREATED SIZE
<TAG_NAME> latest 8797820ed5c5 3 minutes ago 262MB

Start a container from that image:
$ docker run -d -p 8888:80 <TAG_NAME>

In this command, the -d flag detaches the process from the command line. The container will run in the background and
the terminal is free for subsequent input. -p <EXPOSED_PORT>:<INTERNAL_PORT> will open the port 8888 in the host system
and bind it to the port 80 of the system running inside the container. This is necessary for accessing webapps from
the outside world. Open ports are shown in the last column of the output of the ‘docker container ls --all’ command.
The rightmost column shows which external port is bound to which internal port. In the example above you can now
access your webapp at localhost:8888. The last command is the image tag name to start the container from.

Errors during image creation

Q: The command ‘/bin/sh -c dotnet publish -c Release -o out’ returned a non-zero code: 1
A: You have to install the required software during the image build by adding RUN commands to the Dockerfile
https://stackoverflow.com/questions/49088768/dockerfile-returns-npm-not-found-on-build

Deploy to the Remote server

The idea is to prepare an image in your local development environment.
Then create a .tar file of that image and upload the .tar file to the remote system.
https://stackoverflow.com/questions/23935141/how-to-copy-docker-images-from-one-host-to-another-without-using-a-repository

1. Install docker on the remote server:

The assumption is that your remote server is running Ubuntu Linux.
https://docs.docker.com/install/linux/docker-ce/ubuntu/

ssh root@<yourip>
sudo apt-get remove docker docker-engine docker.io containerd runc
sudo apt-get update
sudo apt-get install \
apt-transport-https \
ca-certificates \
curl \
gnupg-agent \
software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
apt-cache madison docker-ce
EXAMPLE: sudo apt-get install docker-ce=<VERSION_STRING> docker-ce-cli=<VERSION_STRING> containerd.io
sudo apt-get install docker-ce=18.06.3~ce~3-0~ubuntu docker-ce-cli=18.06.3~ce~3-0~ubuntu containerd.io
sudo apt-get install docker-ce=5:19.03.1~3-0~ubuntu-xenial docker-ce-cli=5:19.03.1~3-0~ubuntu-xenial containerd.io
sudo docker run hello-world

2. On your local machine, build a Dockerfile and an image from that Dockerfile. The steps for dockerizing an application are explained above.

3. On your local machine, create a .tar file from the docker image, upload and import it on the remote machine.

You can list all images available on your local machine:
docker image ls

Then export one of the images to a file on your filesystem:

docker save -o <path for generated tar file> <image name>
docker save -o ./clone_tag_1_image.tar clone_tag_1

Now, gzip the file to save time uploading it (the server side uses gunzip below, so use gzip rather than zip):

gzip clone_tag_1_image.tar

Upload the file to the server using scp:

scp <source> <destination>
scp clone_tag_1_image.tar.gz root@<your_ip>:/temp

Unzip the image on the server:

gunzip clone_tag_1_image.tar.gz

Import the image on the server:

docker load -i <path to image tar file>
docker load -i /temp/clone_tag_1_image.tar

Run the container detached while mapping the internal port 80 to 8888:

docker run -d -p 8888:80 clone_tag_1

The web app should now be available via the server’s IP on port 8888. If it is not, check the firewall settings of your server.

Example repository

cd /Users/<USER>/dev/dot_net_core
git clone https://github.com/dotnet/dotnet-docker.git


Example

https://github.com/dotnet/dotnet-docker/tree/master/samples/aspnetapp
$ docker run --name aspnetcore_sample --rm -it -p 8000:80 mcr.microsoft.com/dotnet/core/samples:aspnetapp

The app starts and is reachable on http://localhost:8000/


Cheatsheet

Version
$ docker version

General info
$ docker info

List all images downloaded to your local machine
$ docker image ls

List all running docker containers
$ docker container ls --all

Find all running docker containers and their IDs
$ docker ps -a

$ docker port <ContainerID>

Stopping a container
$ docker stop <ContainerID-Prefix>

Find info about a specific docker container
$ docker inspect <containerid>
$ docker inspect <containerid> | grep IPAddress

Build an image from a Dockerfile
$ docker build --tag=clone_tag_1 .

Remove a container
$ docker rm <ContainerID>

Remove all stopped containers
$ docker container prune