Introduction

This chapter describes how to build an image using the Ameba Yocto Linux SDK. It also describes the Realtek Ameba release layer and Ameba-specific usage.

The Yocto Project is an open-source collaboration focused on embedded Linux OS development. For more information, refer to the Yocto Project website.

Several documents on the Yocto Project home page describe in detail how to use the system. To use the basic Yocto Project without the Realtek Ameba release layer, follow the instructions in the Yocto Project Quick Build guide.

Files used to build an image are stored in layers. Layers contain different types of customizations and come from different sources. Some of the files in a layer are called recipes. Yocto Project recipes contain the mechanism to retrieve source code, build and package a component.

The following lists show the layers used in this release.

  • Ameba release layer:

    • meta-realtek

      • meta-realtek-bsp: Realtek BSP layer

      • meta-sdk: Realtek specific SDK customization

  • Yocto Project community layers:

    • poky

Host Setup

To build the Ameba Yocto Linux SDK, you must use Linux; building under either macOS or Windows is currently not supported. You need to install essential host packages on your build host. The following commands install the host packages on an Ubuntu distribution.

  • Basic utilities - host packages for the SDK build.

    $sudo apt install chrpath diffstat lz4 liblz4-tool zstd mtd-utils python3 python3-dev python3-pip python3-pyelftools
    $sudo pip install python-mbedtls sslcrypto cryptography pycryptodome ecdsa
    

    Alternatively, enter the tools installation directory of the SDK, <sdk>/sources/yocto/meta-realtek/tools, and execute the installation script located there.

    $cd <sdk>/sources/yocto/meta-realtek/tools
    $sudo ./install.sh
    
  • Change the default shell to bash (select <No> when asked whether to use dash as the default system shell).

    $sudo dpkg-reconfigure dash
    
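As a sanity check, you can verify that the key host tools are available on the PATH before starting a build. A minimal sketch; the tool list is a subset of the apt packages above:

```shell
# report whether each required host tool is present on PATH
for tool in chrpath diffstat lz4 zstd python3; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: found"
    else
        echo "$tool: MISSING - install it before building"
    fi
done
```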

Toolchain Setup

For Linux

This section introduces the steps to export the Yocto toolchain for users who need it to build code.

  1. Set up the build environment, taking product rtl8730elh-va7 as an example.

    source envsetup.sh -m rtl8730elh-va7 -d ameba-generic
    
  2. Execute the bitbake command for meta-toolchain.

    bitbake meta-toolchain
    
  3. Enter the sdk directory.

    cd build_rtl8730elh-va7_ameba-generic/tmp/deploy/sdk
    
  4. Execute the shell script that installs the toolchain.

    ./ameba-generic-glibc-x86_64-meta-toolchain-cortexa32hf-neon-rtl8730elh-va7-toolchain-4.0.7.sh -d <toolchain>
    

    Where <toolchain> is the customized directory in which to store the target toolchain.

  5. Select Y when prompted.

    The toolchain will be installed in the directory <toolchain>.

  6. Enter <toolchain> and source the environment file.

    source environment-setup-cortexa32hf-neon-rtk-linux-gnueabi
    

    The toolchain environment is now ready to use.

For Windows

This section introduces the steps to prepare the toolchain environment.

  1. Acquire the toolchain zip files from Realtek.

  2. Create a new directory rtk-toolchain under the path {MSYS2_path}\opt.

    Where {MSYS2_path} is your MSYS2 installation path.

  3. Unzip asdk-10.3.x-mingw32-newlib-build-xxxx.zip and vsdk-10.3.x-mingw32-newlib-build-xxxx.zip, and place the toolchain folders (asdk-10.3.x and vsdk-10.3.x) in the rtk-toolchain folder created in step 2.

Note

  • The unzipped folder names must remain exactly as in the step above; do NOT rename them, otherwise you need to modify the toolchain directory in the makefile to customize the path.

  • If a toolchain error such as “Error: No Toolchain in /opt/rtk-toolchain/vsdk-10.3.1/mingw32/newlib” appears when building the project, check whether your toolchain directory matches the directory in the log. Place the toolchain files correctly and try again.

Image Build

This section provides the detailed information along with the process for building an image.

Setup the Environment

Ameba Yocto Linux SDK provides a script, envsetup.sh, that simplifies the setup for Ameba machines. To use the script, specify the name of the machine to build for, as well as other parameters such as the distribution and build directory. The script sets up a build directory and the configuration files for the specified machine.

The syntax for the envsetup.sh script is shown below:

$source envsetup.sh [-h] [-m machine] [-d distro] [-b build dir] [-j jobs] [-t tasks] [-p download dir] [-c sstate dir]

  Option             Usage

  -h                 Display help for the script.

  -m <machine>       Set the target machine to be built.

  -d <distro>        Set the distribution to be built.

  -b <build dir>     Set a non-default path for the project build folder.

  -j <jobs>          Set the number of jobs for make to spawn during the compilation stage.

  -t <tasks>         Set the number of BitBake tasks that can be issued in parallel.

  -p <download dir>  Set a non-default path for DL_DIR.

  -c <sstate dir>    Set a non-default path for SSTATE_DIR.

After the script runs, it creates the working directory, named by the -b option or with a default name if no -b option is specified. A conf folder is created inside it, containing the files bblayers.conf and local.conf.

  • The <builddir>/conf/bblayers.conf file contains all the meta layers used in the Ameba Yocto Linux SDK release.

  • The local.conf file contains the machine and distro specifications:

    MACHINE ??= "rtl8730elh-va8"
    DISTRO ?= "ameba-generic"
    

The MACHINE and DISTRO configurations can be changed by editing this file, if necessary.
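For example, the target machine can be switched with a one-line edit. The sketch below recreates a minimal local.conf only so that it is self-contained; the real file is generated by envsetup.sh in the build directory's conf folder:

```shell
# recreate a minimal local.conf for illustration (normally generated by envsetup.sh)
mkdir -p conf
cat > conf/local.conf <<'EOF'
MACHINE ??= "rtl8730elh-va8"
DISTRO ?= "ameba-generic"
EOF

# switch the target machine without re-running envsetup.sh
sed -i 's/^MACHINE ??=.*/MACHINE ??= "rtl8730elh-va7"/' conf/local.conf
grep '^MACHINE' conf/local.conf
```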

In the meta-realtek layer, the following Ameba machine and distribution configurations can be selected. Check either the release notes or the machine and distro directories for the latest additions.

Machines:

  • rtl8730eah-va6 - Machine configuration for the RTL8730EAH-VA6 chip.
    Configuration path: sources/yocto/meta-realtek/meta-realtek-bsp/conf/machine/rtl8730eah-va6.conf

  • rtl8730elh-va7 - Machine configuration for the RTL8730ELH-VA7 chip.
    Configuration path: sources/yocto/meta-realtek/meta-realtek-bsp/conf/machine/rtl8730elh-va7.conf

  • rtl8730elh-va8 - Machine configuration for the RTL8730ELH-VA8 chip.
    Configuration path: sources/yocto/meta-realtek/meta-realtek-bsp/conf/machine/rtl8730elh-va8.conf

  • rtl8730elh-recovery - Machine configuration to build the recovery image.
    Configuration path: sources/yocto/meta-realtek/meta-realtek-bsp/conf/machine/rtl8730elh-recovery.conf

Distributions:

  • ameba-generic - A distribution containing basic functions.
    Configuration path: sources/yocto/meta-realtek/meta-sdk/conf/distro/ameba-generic.conf

  • ameba-full - A distribution containing full demo functions.
    Configuration path: sources/yocto/meta-realtek/meta-sdk/conf/distro/ameba-full.conf

Note

The RTL8730EAH-VA6 is for NOR flash.

Accelerate Building

Before building, users can download the compilation acceleration package in advance, which usually includes the downloads and sstate-cache directories. Please contact Realtek to get this package. Assume it has been stored in the path <packages> after downloading. After executing envsetup.sh, create symbolic links to these directories in the build-<machine> directory. This way, a large amount of download time can be saved during the build.

cd build-<machine>
ln -s <packages>/downloads .
ln -s <packages>/sstate-cache .

If users have built the SDK successfully before, they can also keep the packages downloaded during the previous build, and create symbolic links pointing to them for the new build. Assume the previous build directory is build-<machine>-1 and the current build directory is build-<machine>-2.

cd build-<machine>-2
ln -s build-<machine>-1/downloads .
ln -s build-<machine>-1/sstate-cache .
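Before launching the build, it is worth checking that the links resolve, since a dangling link causes everything to be fetched again. A self-contained sketch, with temporary directories standing in for the real <packages> path:

```shell
# mock the shared-cache layout so the check can run anywhere; in practice
# <packages> is the acceleration package obtained from Realtek
PKGS=$(mktemp -d)
mkdir -p "$PKGS/downloads" "$PKGS/sstate-cache"
BUILD=$(mktemp -d)
cd "$BUILD"
ln -s "$PKGS/downloads" "$PKGS/sstate-cache" .

# verify both links resolve to real directories
for d in downloads sstate-cache; do
    if [ -d "$d" ]; then echo "$d: ok"; else echo "$d: broken link"; fi
done
```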

Build Image

The SDK provides two ways to build images for Ameba machines: quick command and manual build command.

Quick commands are convenience functions added by the envsetup.sh script. After sourcing envsetup.sh, the following functions are available in the shell environment.

  Command         Description

  m               Build all Linux images for the Ameba machine, including boot.img, kernel.img, rootfs.img, userdata.img and the dtb.

  m clean         Clean all the build output.

  mfw             Build the firmware boot and application images.

  mfw mp          Build the MP firmware boot and images.

  mfw clean       Clean the output files of the firmware.

  mfw menuconfig  Configure the build parameters for the firmware.

  mrecovery       Build the Linux recovery images.

  mkernel         Build the kernel images.

To build all images to flash, run the following command:

$source envsetup.sh -m rtl8730elh-va8
$m
$mfw

The Yocto Project build uses the bitbake command. For example, bitbake <component> builds the named component. Each component build has multiple tasks, such as fetching, configuration, compilation, packaging, and deploying to the target rootfs. The BitBake image build gathers all the components required by the image and builds them in dependency order, task by task.

The following command is an example on how to build an image:

$bitbake ameba-image-core

In the meta-realtek layer, the following image recipes can be selected to build.

  Recipe                Description

  ameba-image-core      A core image containing basic functions.

  ameba-image-userdata  An image used as the user's data file system.

  ameba-image-recovery  A small image for the recovery system.

Bitbake Options

The bitbake command used to build an image is bitbake <image name>. Additional parameters can be used for the specific activities described below. BitBake provides various useful options for developing a single component.

bitbake <parameter> <component>

Where <component> is a desired build package.

The following list shows some BitBake options.

  -c fetch       Fetches the sources if the download state is not marked as done.

  -c cleanall    Cleans the entire component build directory. All the changes in the build directory are lost. The rootfs and state of the component are also cleared. The component is also removed from the download directory.

  -c deploy      Deploys an image or component to the rootfs.

  -c compile -f  Forces a recompile after the image is deployed. Changing source code directly under the temporary directory is not recommended, but if it is changed, the Yocto Project might not rebuild it unless this option is used.

  -c menuconfig  Executes make menuconfig for a specific component, if the component supports it.

  -e             Shows the build environment for the component.

  -g             Lists a dependency tree for an image or component.

  -k             Continues building components even if a build break occurs.

  -DDD           Turns on debugging, three levels deep. Each D adds another level of debug.

Image Deployment

The final images to be programmed are deployed to <build directory>/tmp/deploy/images.

The images are specific to the machine set in the environment setup. After a successful build, the following Linux images are generated.

  • boot.img

  • kernel.img

  • rootfs.img

  • userdata.img

  • linux device tree images

Each device tree image is named after its machine. For example, the image name for machine rtl8730elh-va7-generic is rtl8730elh-va7-generic.dtb.

The firmware boot image and application images are generated with the mfw quick command:

  • km4_boot_all.bin

  • km0_km4_app.bin
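The deployment layout for the Linux images can be illustrated with a mock tree; the file names below follow the lists above, and in a real build they appear under <build directory>/tmp/deploy/images after the build finishes:

```shell
# mock deploy tree (real files are produced by the build)
DEPLOY=$(mktemp -d)/tmp/deploy/images
mkdir -p "$DEPLOY"
touch "$DEPLOY/boot.img" "$DEPLOY/kernel.img" "$DEPLOY/rootfs.img" \
      "$DEPLOY/userdata.img" "$DEPLOY/rtl8730elh-va7-generic.dtb"

# gather everything a flashing tool needs into one folder
OUT=$(mktemp -d)
cp "$DEPLOY"/*.img "$DEPLOY"/*.dtb "$OUT"/
ls "$OUT" | sort
```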

Download Images to Flash

To download images to Flash:

Docker Container

Docker is an open-source platform designed to automate the deployment, scaling, and management of applications in containers. Containers are lightweight, portable, and self-sufficient units that include everything needed to run a piece of software, such as code, runtime, libraries, and system tools.

  • Docker utilizes containerization to package an application and its dependencies into a single container. This ensures consistent behavior across different environments, such as development, testing, and production.

  • Containers can run virtually anywhere, on a developer’s machine, on physical or virtual machines in a data center, or in the cloud—without alteration.

  • Containers share the host system’s kernel, making them more lightweight than traditional virtual machines, which include an entire operating system per instance.

Image

Build Image

Before building a new docker image, you should install docker on your computer. For the installation procedure, refer to the Docker HomePage.

You can run docker on Windows, Linux, macOS, and other systems. We introduce the procedure on Linux here; the procedure on other operating systems is the same as on Linux.

First, there should be a file named Dockerfile in your work directory.

An example Dockerfile is shown below:

# Set the base system.
FROM ubuntu:20.04

# Set environment variables.
ENV DEBIAN_FRONTEND=noninteractive

# Update.
RUN apt-get update

# Install some tools.
RUN apt-get install -y sudo
RUN apt-get install -y make
RUN apt-get install -y make-guile
RUN apt-get install -y build-essential
RUN apt-get install -y gcc
RUN apt-get install -y g++
# Any other tools you need to install ……

# Add toolchain.
ADD rtk-toolchain.tar.gz /opt

# Create a normal user named "ameba", with password "ameba".
RUN useradd -m -s /bin/bash ameba && echo "ameba:ameba" | chpasswd && adduser ameba sudo
RUN echo "ameba ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
USER ameba

# Set work directory.
WORKDIR /home/ameba

# RUN shell command.
CMD ["/bin/bash"]

Then, you can execute docker build command to build your image.

sudo docker build -t <image_name>:<image_tag> .

Where <image_name> is the name of the docker image and <image_tag> is optional, marking the revision of the docker image; if absent, it defaults to latest.

After a successful build, you can use the docker images command to check whether the image is on your host.

sudo docker images

You can add the parameter -a to show all the images, including hidden mid-layer images.

sudo docker images -a

Then all the images including their names, tags, and IDs will be shown.

You can add a new name or tag to an image.

sudo docker tag <image_id> <new_image_name>:<new_image_tag>

Load Image

We provide ready-to-use Docker images for customers. Please contact Realtek to obtain the base image version for compiling the SDK.

For example, when you get the image named rtk_ameba_sdk_build_v1.1.tar, you can load it on your host with the docker load command.

sudo docker load -i rtk_ameba_sdk_build_v1.1.tar

Then you can use the docker images command to check whether the image is on your host.

sudo docker images

If the image is no longer needed, you can delete it by name and tag.

sudo docker rmi <image_name>:<image_tag>

One <image_id> may correspond to one or more <image_name>:<image_tag> pairs. When an <image_id> is tagged with more than one <image_name>:<image_tag>, the command above does not delete the <image_id> itself, because other <image_name>:<image_tag> pairs still reference it.

You can also delete a docker image by ID; in this case, all the <image_name>:<image_tag> pairs corresponding to this <image_id> are deleted at the same time.

sudo docker rmi <image_id>

Container

A Docker container is an instance of a Docker image. It is an executable unit of software that packages up the application along with everything it needs to run consistently in any environment.

  • Containers run in isolated environments, utilizing OS-level virtualization to share the host system’s kernel, ensuring minimal overhead compared to traditional virtual machines.

  • By default, containers are stateless and temporary, meaning any changes made during execution are not saved after they stop unless explicitly configured.

Run Container

After building a new docker image, or loading a ready-to-use docker image, you can run a new container based on the image's name.

sudo docker run --name <container_name> -it <image_name>:<image_tag>

Or, based on the image's ID.

sudo docker run --name <container_name> -it <image_id>

Where <container_name> must be different from the names of any existing containers; if it is absent, docker automatically assigns a unique name to the container.

If you want to mount a host directory into the container, you can add the parameter -v.

sudo docker run --name <container_name> -v <host_dir>:<container_dir> -it <image_name>:<image_tag>
sudo docker run --name <container_name> -v <host_dir>:<container_dir> -it <image_id>

Where <host_dir> is a directory on your host, which must already exist. <container_dir> is a directory in the container; if it does not exist, it is created automatically.

After mounting the directory successfully, you can operate on this directory from the host and the container at the same time.

In the container, you can run exit to return to your host.

exit

Then you can use the docker ps command to check whether the container is running.

sudo docker ps

If you want to see all the containers, including both running and stopped ones, add the parameter -a.

sudo docker ps -a

If the container is no longer needed, you can delete it by <container_name> or <container_id>.

sudo docker rm <container_name>
sudo docker rm <container_id>

Execute an Existing Container

If you want to execute a container that is not running, you must start it first, by container name or ID.

sudo docker start <container_name>
sudo docker start <container_id>

Then, run the docker exec command to get into the container, by name or ID.

sudo docker exec -it <container_name> /bin/bash
sudo docker exec -it <container_id> /bin/bash

After exiting the container, you can stop this container by name or ID.

sudo docker stop <container_name>
sudo docker stop <container_id>

In addition, you can also directly start an existing container with the parameter -ia:

sudo docker start -ia <container_name>
sudo docker start -ia <container_id>

Then run exit to leave the container; the container is stopped at the same time.

exit

Build SDK in Container

For the images released by Realtek, the environment has already been set up, so you can build the SDK directly.

If you mount a host directory into the container, you can put the SDK into that directory; it is then visible in the container, and you can build the code directly.

If you do not mount a host directory, you can use the docker cp command to copy files or directories between the host and the container.

Copy files or directories from the host to the container:

sudo docker cp <host_directory>/<files_or_directory> <container_name>:<container_directory>
sudo docker cp <host_directory>/<files_or_directory> <container_id>:<container_directory>

After building the SDK successfully, you can copy files or directories from the container back to the host:

sudo docker cp <container_name>:<container_directory_or_files> <host_directory>
sudo docker cp <container_id>:<container_directory_or_files> <host_directory>

You can also get the SDK from GitHub inside the container, using the same procedure as on the host.

Use Podman Instead

Podman can manage images and containers just as docker does. Users can get podman from the Podman HomePage.

The command formats for Podman and Docker are almost the same; you can simply replace the docker prefix with podman. For example, use podman build instead of docker build, podman images instead of docker images, podman ps instead of docker ps, etc. All the procedures for managing images and containers are the same for podman and docker.
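The substitution can also be scripted. A minimal sketch using a shell function so that existing docker-based instructions run under podman unchanged; the function shadows the docker command only inside the current shell:

```shell
# forward every `docker ...` invocation to podman (add to ~/.bashrc to persist)
docker() { podman "$@"; }

# from here on, the commands in the previous sections work verbatim, e.g.:
# docker images        -> podman images
# docker ps -a         -> podman ps -a
```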