A debug session

Once you have an Open SoC Debug daemon running and connected to a target system, you can initiate a debug session. In the following sections you will learn the basic features for interacting with the system.

Before you begin, you should build the example programs:

make -C $TOP/fpga/bare_metal/examples hello.riscv trace.riscv

Resetting the system

In the first step we will start the command line interface and reset the system. When started, the command line interface automatically connects to the daemon.

The steps to connect to the verilated RTL simulation are the same as described before.

Now you are connected to the debug system and can inspect and control the system. First, reset the system (osd> is the command line prompt in the following) and leave the command line again:

osd> reset
osd> quit

The command resets the entire system. If you run the simulation with VCD output (add +vcd to the parameters) you can observe the reset taking effect. reset asserts both reset signals, for the CPU and for the rest of the SoC. With the parameter -halt, the CPU reset remains asserted, which allows you to load programs into memory or configure debug modules. start then de-asserts the CPU reset:

osd> reset -halt
osd> [.. some other commands ..]
osd> start

Let us now see that in action.

Capturing UART output

As described before, the UART controller has been replaced with an emulation module that has the same physical interface, but transports the characters over the debug infrastructure.

In the command line you can start a terminal that displays the output by executing terminal <module id>. The module id is the one assigned during enumeration (here: 2). This opens an xterm and directs the output to it.

When you run the following sequence with the terminal open, you will see the cores start and emit the printf message after the third command, and again after the fourth:

osd> reset -halt
osd> terminal 2
osd> start
osd> reset

Now you can run the programs as before and reset the system without touching a board, but the real power comes with the other modules.

Generate a function trace

To analyze the execution of the program on the core, you can use the core trace module. It can be configured to log all function calls and returns to a file. It also logs changes of the execution mode.

A function trace is generated by activating the core trace module and supplying the ELF file currently executed on the core:

osd> reset -halt
osd> terminal 2
osd> ctm log ctm.log 4 path/to/hello.riscv
osd> start
osd> quit

Now you can inspect the output file ctm.log:

00033c7c enter init_tls
00033cac enter memcpy
00033d48 enter memset
00033dd1 enter thread_entry
00033e41 enter main
00033e67 enter uart_init
00033f21 enter printf
00033f9e enter vprintfmt
0003432e enter syscall
000343a5 change mode to 3
0003444c enter handle_trap
00034616 enter handle_frontend_syscall
0003469b enter uart_send
000346f4 enter uart_send
00034723 enter uart_send
00034752 enter uart_send
00034781 enter uart_send
000347b0 enter uart_send
000347df enter uart_send
0003480e enter uart_send
0003483d enter uart_send
0003486c enter uart_send
0003489b enter uart_send
000348ca enter uart_send
000348f9 enter uart_send
00034b9f enter exit
00034bcd enter syscall
00034bf4 change mode to 3
00034c4f enter handle_trap
00034c92 enter tohost_exit
Overflow, missed 12 events
Overflow, missed 13 events
Overflow, missed 13 events
Overflow, missed 13 events
Overflow, missed 13 events
Overflow, missed 13 events
Overflow, missed 14 events
Overflow, missed 14 events
Overflow, missed 3 events
Overflow, missed 3 events
Overflow, missed 3 events
Overflow, missed 4 events
Overflow, missed 3 events
Overflow, missed 3 events
Overflow, missed 3 events
Overflow, missed 3 events
Overflow, missed 4 events
Overflow, missed 4 events
Overflow, missed 4 events
[..]
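The log format is simple enough to post-process. As an illustration (this helper is hypothetical and not part of the Open SoC Debug tools), a few lines of Python can classify each line of ctm.log:

```python
import re

def parse_ctm_line(line):
    # "<hex pc> enter <function>" - a function call
    m = re.match(r"([0-9a-f]+) enter (\S+)", line)
    if m:
        return ("enter", int(m.group(1), 16), m.group(2))
    # "<hex pc> change mode to <n>" - an execution-mode change
    m = re.match(r"([0-9a-f]+) change mode to (\d+)", line)
    if m:
        return ("mode", int(m.group(1), 16), int(m.group(2)))
    # "Overflow, missed <n> events" - dropped trace events
    m = re.match(r"Overflow, missed (\d+) events", line)
    if m:
        return ("overflow", None, int(m.group(1)))
    return None
```

With this you could, for example, count how often uart_send was entered, or sum up how many events were lost to overflows.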

Minimally-invasive software debugging

As introduced earlier, lowRISC adds a software trace debugging technique that is minimally invasive: each trace call adds only a few cycles and does not stall the system.

The code trace.c from the examples emits four such events:

#include <stdint.h>

#define STM_TRACE(id, value) \
  { \
    asm volatile ("mv   a0,%0": :"r" ((uint64_t)value) : "a0"); \
    asm volatile ("csrw 0x8f0, %0" :: "r"(id)); \
  }

static void trace_event0() {
  STM_TRACE(0, 0x42);
}

static void trace_event1(uint64_t value) {
  STM_TRACE(1, value);
}

static void trace_event2(uint64_t id, uint64_t value) {
  STM_TRACE(id, value);
}

int main() {
  STM_TRACE(0x1234, 0xdeadbeef);

  trace_event0();
  trace_event1(23);
  trace_event2(0xabcd, 0x0123456789abcdef);
}
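As a sanity check, you can work out by hand which events this program should produce. Each STM_TRACE invocation yields one (id, value) pair; the following Python sketch (a hypothetical model, not part of the tools) lists the four pairs in program order and renders them with the 4-hex-digit id and 16-hex-digit value formatting that the software trace log uses:

```python
# The four STM_TRACE invocations in trace.c, in program order.
events = [
    (0x1234, 0xdeadbeef),          # STM_TRACE at the start of main()
    (0x0000, 0x42),                # trace_event0()
    (0x0001, 23),                  # trace_event1(23)
    (0xabcd, 0x0123456789abcdef),  # trace_event2(0xabcd, ...)
]

def format_event(event_id, value):
    # 4 hex digits for the id, 16 hex digits for the 64-bit value
    return "{:04x} {:016x}".format(event_id, value)

for event_id, value in events:
    print(format_event(event_id, value))
```

The timestamps in the real log come from the hardware and cannot be predicted this way, but the id/value columns should match.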

You need to load it when starting the simulation:

$TOP/vsim/DebugConfig-sim +waitdebug +load=$TOP/fpga/bare_metal/examples/trace.riscv

Then open the CLI and run:

osd> reset -halt
osd> stm log stm.log 5
osd> start
osd> quit

You need to wait a few seconds between start and quit. This generates the following entries in stm.log:

0002574e 1234 00000000deadbeef
00025784 0000 0000000000000042
0002579c 0001 0000000000000017
000257d2 abcd 0123456789abcdef
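Each line consists of a hex timestamp, the event id, and the traced value. If you want to post-process the log, a small parser is straightforward (a hypothetical helper, not part of the Open SoC Debug tools):

```python
def parse_stm_line(line):
    # Each stm.log line is "<hex timestamp> <hex id> <hex value>"
    timestamp, event_id, value = line.split()
    return int(timestamp, 16), int(event_id, 16), int(value, 16)

# Example: decode one line from the log above
ts, event_id, value = parse_stm_line("0002574e 1234 00000000deadbeef")
```

This turns the log into integers you can filter by event id or correlate by timestamp.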

Scripting the command line interface

You can execute commands at startup of the CLI by creating a script, e.g. this startup.script:

reset -halt
terminal 2

When you call the CLI with this startup script, the commands are executed before you enter the shell:

osd-cli -s startup.script
execute: reset -halt
execute: terminal 2
osd> 

Similarly, you can run CLI in batch mode, e.g., with run.script:

reset -halt
stm log stm.log 5
ctm log ctm.log 4 path/to/hello.riscv
start
wait 10
quit

Calling the CLI in batch mode will run the demos from above and wait for 10 seconds before quitting:

osd-cli -b run.script
execute: reset -halt
execute: stm log stm.log 5
execute: ctm log ctm.log 4 
execute: start
execute: wait 10
execute: quit

Loading the ELF to memory

Until now we have started the simulation with the ELF program pre-loaded into memory. We can also load the ELF into memory from within the CLI:

osd-cli
osd> mem loadelf path/to/hello.riscv 3
osd> terminal 2
osd> start

Python scripting

To make debugging truly convenient, there are Python bindings that let you run automated debug sessions with reusable scripts. An example script is provided as $TOP/tools/share/opensocdebug/examples/runelf.py:

import opensocdebug
import sys

if len(sys.argv) < 2:
    print("Usage: runelf.py <filename>")
    exit(1)

elffile = sys.argv[1]

osd = opensocdebug.Session()

osd.reset(halt=True)

for m in osd.get_modules("STM"):
    m.log("stm{:03x}.log".format(m.get_id()))

for m in osd.get_modules("CTM"):
    m.log("ctm{:03x}.log".format(m.get_id()), elffile)

for m in osd.get_modules("MAM"):
    m.loadelf(elffile)

osd.start()
osd.wait(10)

As you can see, the script does pretty much what we did throughout this tutorial. However, it is more flexible, as it works with other systems too, and it becomes more powerful when integrated with your testing environment.

To try this example, just run:

python $TOP/tools/share/opensocdebug/examples/runelf.py path/to/hello.riscv