NVIDIA Jetson Orin Nano

Getting Started with NVIDIA Jetson Orin Nano

The NVIDIA Jetson Orin Nano is a powerful single-board computer built with an NVIDIA Ampere GPU for performing a variety of parallel-processing tasks, such as AI and machine learning. In this guide, we will walk you through the process of flashing NVIDIA’s Ubuntu image to the Orin Nano Developer Kit.

The Jetson Orin Nano Developer Kit Getting Started Guide defaults to flashing an SD card with the pre-configured Ubuntu image (similar to how you might set up a Raspberry Pi). This approach has two issues:

  1. The SD card is much slower and offers less space than an NVMe M.2 SSD. If you are working with large AI models, I highly recommend purchasing and mounting an SSD in one of the available M.2 slots under the dev kit.
  2. JetPack 6.0+ (with the newest Ubuntu OS) contains updated QSPI drivers. If you try to flash just the SD card, you will likely find that your Orin Nano simply boots to a blank screen due to the outdated drivers already on the board. As a result, you must use the NVIDIA SDK Manager from a host computer to flash the OS directly to the dev kit (i.e. over a USB cable) the first time. See this post for more information.

To ensure that you can flash to either SSD or SD card as well as have the most up-to-date drivers, I recommend using the SDK Manager to flash directly to the board. The rest of this guide will show you how to use the SDK Manager to flash Ubuntu to the Orin Nano dev kit.

Required Hardware

You will need the following hardware:

  • NVIDIA Jetson Orin Nano Developer Kit (with its power adapter)
  • microSD card or NVMe M.2 SSD (SSD recommended, especially for large AI models)
  • USB-C cable to connect the dev kit to your host computer
  • Jumper or jumper wire (for putting the board into recovery mode)
  • Keyboard, mouse, and monitor (optional)

Insert the SD card on the underside of the Orin Nano (you can see the SD card slot between the Orin Nano card and the carrier board) or attach the SSD to the underside of the dev kit carrier board. Connect a keyboard, mouse, and monitor if desired. Note that the USB-C connector (not the USB-A ports) provides an Ethernet pass-through, which means you can SSH into the board over USB if you don’t want to connect other peripherals.

Install and Run Required Host Operating System

To flash the operating system (OS) onto the Orin Nano, you should use the NVIDIA SDK Manager running from a host computer. In my experience, you absolutely must use a supported host operating system to run the SDK Manager. See the supported OS chart on the SDK Manager page to figure out which host OS you need to use. For this guide, we will use the SDK Manager to install JetPack 6.0, which means we must use exactly Ubuntu 20.04 or Ubuntu 22.04 (not Linux Mint or any other derivative; it must be the official Ubuntu distro).

If you happen to be running either Ubuntu 20.04 or Ubuntu 22.04, great! If not, I recommend creating a bootable USB drive that you can use to try Ubuntu without installing it.

  1. Download ubuntu-22.04.4-desktop-amd64.iso from this page
  2. Follow these instructions to create a bootable USB drive with the Ubuntu image

Boot your host computer into Ubuntu 22.04 from the USB drive. Note that you may need to adjust the boot order or Secure Boot settings in your BIOS/UEFI.

Install NVIDIA SDK Manager

The SDK Manager is a program that allows you to install operating systems, libraries, and additional software onto a connected NVIDIA single-board computer.

Boot and log into your host OS (Ubuntu 22.04, if you followed the above instructions). Open Software & Updates and enable Community-maintained free and open-source software (universe) and Proprietary drivers for devices (restricted) options.

Open a browser and navigate to https://developer.nvidia.com/sdk-manager. Click the .deb Ubuntu link at the top-right of the page to download the latest SDK Manager package. Take a look at the filename of the .deb file you just downloaded. It will contain version and build numbers, which we need for the next step. For example, if the file is named sdkmanager_2.1.0-11669_amd64.deb, then the version is 2.1.0 and the build is 11669.
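As a quick sketch, you can pull the version and build numbers out of the filename programmatically (parse_sdkmanager_name is a hypothetical helper; the filename is the example above):

```python
import re

# Extract the version and build numbers from an SDK Manager package name
# of the form sdkmanager_<version>-<build>_amd64.deb.
def parse_sdkmanager_name(filename):
    m = re.match(r"sdkmanager_([\d.]+)-(\d+)_amd64\.deb$", filename)
    if m is None:
        raise ValueError(f"unexpected filename: {filename}")
    version, build = m.groups()
    return version, build

print(parse_sdkmanager_name("sdkmanager_2.1.0-11669_amd64.deb"))
# ('2.1.0', '11669')
```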

Open a terminal and install some dependencies:

sudo apt update
sudo apt install -y libcanberra-gtk-module

From there, install the .deb package (replace [version] and [build] with the version and build numbers in the actual filename):

sudo dpkg -i sdkmanager_[version]-[build]_amd64.deb

Connect Orin Nano

To flash the Orin Nano using the SDK Manager, it must first be put into “recovery mode.” To do that, attach a jumper or jumper wire between the FC_REC and GND pins (pins 2 and 3) on the underside of the Orin Nano card.

Connect a cable between the USB-C port on the dev kit and a USB port on your host computer. Plug the power adapter into the dev kit.

In a terminal on the host computer, enter the following command:

lsusb

You should see ID 0955:7523 NVIDIA Corp. APX as one of the items. This ID and name are important! If you do not see this ID/name, it means the board is not in recovery mode, not connected, or not powered. The SDK Manager looks for this exact name to find the board in recovery mode.
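If you want to script this check, here is a minimal sketch (in_recovery_mode is a hypothetical helper; the sample line mirrors the expected lsusb output):

```python
# Scan `lsusb` output for the Orin Nano recovery-mode USB ID (0955:7523).
import subprocess

RECOVERY_ID = "0955:7523"

def in_recovery_mode(lsusb_output: str) -> bool:
    return any(RECOVERY_ID in line for line in lsusb_output.splitlines())

# On a real host you would capture the output like this:
#   out = subprocess.run(["lsusb"], capture_output=True, text=True).stdout
sample = "Bus 001 Device 004: ID 0955:7523 NVIDIA Corp. APX"
print(in_recovery_mode(sample))  # True
```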

Flash OS With the SDK Manager

Enter the following command to run the SDK Manager:

sdkmanager

You should be presented with a new SDK Manager window. Click Login to bring up a browser and log in to your NVIDIA account (create one if you do not already have one).

If you get a pop-up in the SDK Manager asking you to select a board, select Jetson Orin Nano [8GB developer kit version] and click OK.

Next, select the latest JetPack (6.0 for me), and select any additional SDKs, like DeepStream, if required (none for me).

Click Continue. Add additional components as needed. In theory, you should be able to download/install these components on the Jetson board once you have it up and running.

Note that the SDK Manager will download a LOT of data! You’ll need ~50 GB of free space to make this all happen:

  • If you’re running Ubuntu natively and have the available space, just leave the target locations as their defaults.
  • If you are running from an Ubuntu Live USB/CD and don’t have the space to download everything, mount additional storage and change the Download folder and Target HW image folder locations. Note that you might need to re-mount the drives in read/write mode. You must use a partition formatted with ext4, as the installation script attempts to set Linux permissions when configuring files/executables.

Click Continue and wait. And wait. And wait. You’ll download a lot of data.

After the download is done, select any desired SDK components you might want for development (you can install them later from the Orin Nano if you wish). I left mine at the defaults. You will also be asked where you would like to flash the OS and SDK components:

  • Selected device: Jetson Orin Nano [8GB developer kit version]
  • OEM Configuration: Runtime (this will run the System Configuration Wizard on first boot)
  • Storage Device: choose SD card or SSD (depending on what hardware you installed)

Note: If you chose Pre-config for OEM Configuration, you will be asked to provide username/password details to configure the Orin Nano OS. It will skip the System Configuration Wizard on first boot, which means you don’t need to attach a keyboard, mouse, and monitor to finish the installation process.

Click Flash. Wait some more. Eventually, the Orin Nano should boot, and you should be presented with the Ubuntu System Configuration Wizard.

When finished with the configuration wizard, log in to Ubuntu. At this point, the USB-C port will expose a virtual Ethernet interface (l4tbr0) with the default IP address 192.168.55.1. That means you can ping or SSH into your Orin Nano over USB (e.g. ssh [username]@192.168.55.1).

Shut down the Orin Nano and remove the power adapter. Remove the “Recovery Mode” jumper.

Run the Orin Nano

Attach the power adapter to the Orin Nano and wait for it to boot. You should be presented with a login screen. Enter your username and password to start using Ubuntu!

Troubleshooting

If the SDK Manager gives you any trouble (like not letting you change the download folders), you can close the SDK Manager and delete its cache. Note that you will need to log in again the next time you start the SDK Manager.

rm -rf ~/.nvsdkm/
rm -rf ~/Downloads/nvidia
sdkmanager

If the SDK Manager fails on download/install, check the logs. If you see the error chroot: failed to run command 'dpkg': Exec format error, it means that qemu is not installed or running correctly (see here for more info). Run the following commands and restart the SDK Manager.

sudo apt-get install qemu-user-static
sudo update-binfmts --import qemu-aarch64
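To confirm the fix took effect, here is a quick sketch (qemu_aarch64_registered is a hypothetical helper; binfmt_misc exposes registered handlers under /proc):

```python
# Check whether the qemu-aarch64 binfmt handler is registered; the
# commands above create this entry under /proc/sys/fs/binfmt_misc.
import os

def qemu_aarch64_registered() -> bool:
    return os.path.exists("/proc/sys/fs/binfmt_misc/qemu-aarch64")

print(qemu_aarch64_registered())
```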

Going Further: Local LLM

The Orin Nano contains a powerful (for its size) GPU, but taking advantage of that GPU means carefully installing and configuring software. If you want to run a local LLM (think ChatGPT, but smaller), check out these great tutorials on the NVIDIA Jetson AI Lab site.

In a future post, I will show how to install Ollama and Piper-TTS on the Orin Nano to run my hopper-chat project for a complete voice assistant.

Install TensorFlow with GPU support

How to Install TensorFlow with GPU Support on Windows

This tutorial will show you how to install TensorFlow with GPU support on Windows. You will need an NVIDIA graphics card that supports CUDA, as TensorFlow still only officially supports CUDA (see here: https://www.tensorflow.org/install/gpu).

If you are on Linux or macOS, you can likely install a pre-made Docker image with GPU-supported TensorFlow. This makes life much easier. See here for details (this article is about a year old, so a few things might be out of date). However, for those of us on Windows, we need to do things the hard way, as there is no NVIDIA Docker support on Windows.

See this article if you would like to install TensorFlow on Windows without GPU support.

[Update February 13, 2022] Updated some screenshots and a few commands to ensure that everything works with TensorFlow 2.7.0.

Prerequisites

To start, you need to play the version tracking game. First, make sure your graphics card can support CUDA by finding it on this list: https://developer.nvidia.com/cuda-gpus.

For example, my laptop has a GeForce GTX 1060, which supports CUDA and Compute Capability 6.1.

You can find the model of your graphics card by clicking in the Windows search bar and entering “dxdiag.” This tool will identify your system’s hardware. The Display tab should list your graphics card (if present on your computer).

DirectX Diagnostic Tool

Then, we need to work backwards, as TensorFlow usually does not support the latest CUDA version (note that if you compile TensorFlow from source, you can likely enable support for the latest CUDA, but we won’t do that here). Take a look at this chart to view the required versions of CUDA and cuDNN.

At the time of writing (updated Feb 13, 2022), this is the most recent TensorFlow version and required software:

Version               Python version  Compiler   Build tools  cuDNN  CUDA
tensorflow_gpu-2.7.0  3.7-3.9         MSVC 2019  Bazel 3.7.2  8.1    11.2

Take note of the required software versions listed for the particular TensorFlow version you wish to use. While you could compile TensorFlow from source to support newer versions, it’s much easier to install the specific versions listed here so we can install TensorFlow using pip.
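To keep the version tracking straight, the table row above can be encoded in a small script. This is just a sketch that mirrors the table (names like REQUIREMENTS and python_supported are mine, not from TensorFlow):

```python
# Encode the compatibility row above (TF 2.7.0: Python 3.7-3.9,
# cuDNN 8.1, CUDA 11.2) so a script can sanity-check versions.
REQUIREMENTS = {
    "tensorflow_gpu-2.7.0": {"python": (7, 9), "cudnn": "8.1", "cuda": "11.2"},
}

def python_supported(tf_version: str, py_version: str) -> bool:
    lo, hi = REQUIREMENTS[tf_version]["python"]
    major, minor = (int(x) for x in py_version.split(".")[:2])
    return major == 3 and lo <= minor <= hi

print(python_supported("tensorflow_gpu-2.7.0", "3.9"))   # True
print(python_supported("tensorflow_gpu-2.7.0", "3.10"))  # False
```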

Install Microsoft Visual C++ Compiler

The CUDA Toolkit uses the Microsoft Visual C++ (MSVC) compiler. The easiest way to install it is through Microsoft Visual Studio.

Download and install Visual Studio Community (which is free) from this site: https://visualstudio.microsoft.com/vs/community/. Yes, it’s a full IDE that we won’t use; we just need the compiler that comes with it.

[Update Feb 13, 2022] Note: at this time, the CUDA Toolkit installer will not find the latest version of Visual Studio Community (2022). You will need to install the older 2019 version by downloading from here.

Run the installer. You will be asked to install workloads. Click on the Individual components tab. Search for “msvc 2019” and select the latest MSVC C++ 2019 build tools version for your computer. For me, that was MSVC v142 – VS 2019 C++ x64/x86 build tools (Latest).

Install MSVC for Visual Studio Community

Click Install. When asked about continuing installation without a workload, click Continue. After installation is complete, you do not need to sign into Visual Studio. Simply close out all of the installation windows.

Install CUDA Toolkit

Navigate to the CUDA Toolkit site. Note the CUDA version in the table above, as it’s likely not the latest CUDA release. So, you’ll need to click on Archive of Previous CUDA Releases. Download the CUDA Toolkit version that is required for the TensorFlow version you wish to install (see the table in the Prerequisites section). For me, that would be CUDA Toolkit 10.1 update2 (Feb 13, 2022 update: CUDA Toolkit 11.2.2).

CUDA toolkit archive versions

Download the installer for your operating system (which is probably Windows 10). I used the exe (network) installer so that it downloads only the required components.

Run the installer. It will take a few minutes to scan your system. Once scanning is done, accept the license agreement and select Custom (Advanced) install.

Deselect the components you don’t need. For example, we likely won’t be developing custom CUDA kernels, so deselect Nsight Compute and Nsight Systems. I don’t have a 3D monitor, so I’ll deselect 3D Vision. I’ll keep PhysX selected for gaming, but feel free to deselect it, as it’s not needed by TensorFlow. You can leave everything else selected.

NVIDIA CUDA Toolkit installer

Click Next. Leave the installation directories as default (if you wish) and click Next again to download and install all of the drivers and toolkit. This will take a few minutes. Close the installer when it finishes.

Install cuDNN

GPU-accelerated TensorFlow relies on NVIDIA cuDNN, which is a collection of libraries used to run neural networks with CUDA.

Head to https://developer.nvidia.com/rdp/cudnn-download. Create an NVIDIA Developer account (or log in if you already have one). Ignore the cuDNN version listed in the TensorFlow version table (in the Prerequisites section). Instead, head to the cuDNN Archive and download the version that corresponds to the CUDA version you just installed.

For example, I just installed CUDA 11.2, so I’m going to download cuDNN v8.2.1 (which is the latest version that supports CUDA 11.x). Choose the cuDNN Library for your operating system (e.g. Windows).

Download NVIDIA cuDNN library

The next part is a bizarre and seemingly old-fashioned method of installing a library. The full instructions can be found on this NVIDIA page (see section 3: Installing cuDNN on Windows).

Unzip the downloaded archive. Navigate into the unzipped directory and copy the following files into the CUDA installation directory (where * matches any files with the listed file extension and vxx.x is the CUDA version you installed).

Copy <cuDNN directory>\cuda\bin\*.dll to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\vxx.x\bin

Install NVIDIA cuDNN dll files

Copy <cuDNN directory>\cuda\include\*.h to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\vxx.x\include

Install NVIDIA cuDNN header files

Copy <cuDNN directory>\cuda\lib\x64\*.lib to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\vxx.x\lib\x64

Install NVIDIA cuDNN lib files
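If you’d rather script the three copy steps, here is a sketch using Python’s standard library (copy_cudnn is a name I made up; the source/destination patterns mirror the manual steps above, and the destination folders must already exist, as they do in a CUDA install):

```python
# Copy cuDNN DLLs, headers, and import libraries into the CUDA install.
import glob
import os
import shutil

def copy_cudnn(cudnn_dir: str, cuda_dir: str) -> int:
    """Copy cuDNN files into an existing CUDA install; returns file count."""
    copies = [
        (os.path.join("cuda", "bin", "*.dll"), "bin"),
        (os.path.join("cuda", "include", "*.h"), "include"),
        (os.path.join("cuda", "lib", "x64", "*.lib"), os.path.join("lib", "x64")),
    ]
    copied = 0
    for pattern, dest in copies:
        for src in glob.glob(os.path.join(cudnn_dir, pattern)):
            shutil.copy2(src, os.path.join(cuda_dir, dest))
            copied += 1
    return copied

# Example (placeholder paths; run from an elevated prompt if needed):
# copy_cudnn(r"C:\Users\you\Downloads\cudnn",
#            r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2")
```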

Next, we need to update our environment variables. Open Control Panel > System and Security > System > Advanced System Settings.

Edit system properties in Windows

Click Environment Variables at the bottom of the window.

CUDA update environment variables

In the new window, select the Path variable in the System variables pane and click Edit.

You should see two CUDA entries already listed.

System path CUDA entries

If you do not see these listed, add the following directories to this Path list (where vxx.x is your CUDA version number):

  • C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\vxx.x\bin
  • C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\vxx.x\libnvvp

Click OK on the three pop-up windows to close out of the System Properties.
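As a sketch, you can check that a Windows-style PATH string contains both directories (cuda_dirs_on_path is a hypothetical helper; v11.2 stands in for vxx.x):

```python
# Verify both CUDA directories appear in a ";"-separated PATH string.
def cuda_dirs_on_path(path_value: str, cuda_version: str = "v11.2") -> bool:
    base = (r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA"
            + "\\" + cuda_version)
    entries = path_value.split(";")
    return (base + r"\bin" in entries) and (base + r"\libnvvp" in entries)

sample_path = ";".join([
    r"C:\Windows\System32",
    r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\bin",
    r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\libnvvp",
])
print(cuda_dirs_on_path(sample_path))  # True
```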

Install TensorFlow

You can install TensorFlow any way you wish, but I highly recommend doing so through Anaconda. It makes installing Python, various packages, and managing environments much easier.

Head to anaconda.com. Download the Individual Edition for your operating system (Windows x64). Anaconda comes pre-packaged with Python set at a particular version, so you will need to note the version of Python required by TensorFlow in the Prerequisites section. For me, I need something between 3.7 and 3.9. As a result, the Python 3.9 that comes with Anaconda (at the time of this Feb 13, 2022 update) works fine.

Run the Anaconda installer, accepting all the defaults.

When it’s done, run the Anaconda Prompt (anaconda3).

The Anaconda Prompt

In the terminal, we’ll create a new Python environment, which will help us keep this version of TensorFlow separate from the non-GPU version. First, update conda with the following:

conda update -n base -c defaults conda

Then, enter the following commands to create a virtual environment:

conda create --name tensorflow-gpu
conda activate tensorflow-gpu

Install a version of Python supported by TensorFlow-GPU (as given by the table in the Prerequisites section) for your virtual environment (I’ll use Python version 3.9).

conda install python=3.9

Enter the following command to make sure that you are working with the version of Python you expect:

python --version

If you wish to also install Jupyter Notebook, you can do that with:

conda install jupyter

Rather than let pip try to figure out which version of TensorFlow you want (it will likely be wrong), I recommend finding the exact .whl file from TensorFlow’s site. Head to the TensorFlow Pip Installer page and look at the Package Location list.

Look under the Windows section for the wheel file installer that supports GPU and your version of Python. For me, this will be the wheel file listed with Python 3.9 GPU support. Note that GPU support (_gpu), TensorFlow version (-2.7.0), and supported Python version (-cp39) are listed in the filename. Highlight and copy the URL with the .whl file you want.

Download the correct TensorFlow wheel file
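The filename tags can also be decoded programmatically; a sketch with a hypothetical parse_wheel_name helper (the example filename follows the standard wheel naming convention, so double-check it against the actual Package Location list):

```python
# Decode the package name, version, and Python tag embedded in a
# wheel filename like tensorflow_gpu-2.7.0-cp39-cp39-win_amd64.whl.
import re

def parse_wheel_name(name: str):
    m = re.match(r"(?P<pkg>\w+)-(?P<version>[\d.]+)-(?P<py>cp\d+)-", name)
    return m.groupdict() if m else None

info = parse_wheel_name("tensorflow_gpu-2.7.0-cp39-cp39-win_amd64.whl")
print(info)  # {'pkg': 'tensorflow_gpu', 'version': '2.7.0', 'py': 'cp39'}
```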

In Anaconda, enter the following command, replacing <wheel_url> with the URL that you copied in the previous step (i.e. paste it in).

python -m pip install <wheel_url>

Press Enter and let this run. It will take a few minutes.

Installing TensorFlow GPU wheel in Anaconda

When that’s done, go into the Python command line interface:

python

From there, enter the following commands (one at a time):

import tensorflow as tf 
print(tf.test.is_built_with_cuda()) 
print(tf.config.list_physical_devices('GPU'))

These will tell you if TensorFlow is capable of running on your graphics card. The first line imports TensorFlow, the second line makes sure it can work with CUDA (it should output “True”), and the third line should list the GPUs available to TensorFlow.

Test TensorFlow for GPU support in Python

Note that if you see any weird errors about missing Windows DLL files (whether in the Anaconda prompt, within Python, or in Jupyter Notebook), try the following from within Anaconda:

python -m pip install pypiwin32

Close the Anaconda prompt.

Running TensorFlow

When you’re ready to do some machine learning stuff, open the Anaconda Prompt and enter the following:

conda activate tensorflow-gpu

From there, you can use Python in Anaconda or start a Jupyter Notebook session (see here for a good overview of how to work with Jupyter Notebook):

jupyter notebook

If you wish to install a new Python package, like matplotlib, you can enter the following into the Anaconda Prompt (make sure you are in your environment, tensorflow-gpu, and exit Jupyter Notebook by pressing ctrl+c):

python -m pip install <name_of_package>

Alternatively, you can install a package from within Jupyter Notebook by running the following command in a cell:

!python -m pip install <name_of_package>

For example, here is how I installed matplotlib:

Install Python package from within Jupyter Notebook

Some libraries, like OpenCV, require you to install system components or dependencies outside of the Python environment, which means you can’t simply use pip. If so, check to see if the package is available on conda-forge. If it is, you can install it in your Anaconda environment with:

conda install -c conda-forge <name_of_package>

Going Further

I hope this helps you get started using TensorFlow on your GPU! Thanks to Anaconda, you can install non-GPU TensorFlow in another environment and switch between them with the conda activate command. If the GPU version starts giving you problems, simply switch to the CPU version.

See the following videos if you are looking to get started with TensorFlow and TensorFlow Lite: