Use instance by type

Use GPU instance

Install the appropriate driver to use a GPU type instance. Create an instance using a GPU-specific OS image with the drivers preinstalled, or create an instance from a default image and then download and install the public drivers separately. The following explains how to use the GPU by installing the GPU driver for each operating system.

info

This guide is based on the Ubuntu 20.04 image currently provided by KakaoCloud and the NVIDIA A100 GPU.

Step 1. Install NVIDIA driver

Install NVIDIA drivers. Recommended drivers and CUDA versions are:

| GPU type    | NVIDIA version      | CUDA version                |
| ----------- | ------------------- | --------------------------- |
| NVIDIA A100 | 450.80.02 and above | CUDA Toolkit 11.1 or higher |
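
Before starting, you can check whether a working driver is already loaded; nvidia-smi only succeeds once a driver is installed, so a failure here simply means Step 1 still applies. This quick pre-check is optional and not part of the original steps.

    Driver pre-check command
    $ nvidia-smi || echo "No working NVIDIA driver detected; proceed with Step 1."
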
  1. Run the command to check whether there is an NVIDIA device on the instance.

    NVIDIA device search command
     $ lspci | grep -i NVIDIA
  2. Check which driver versions can be installed.

    • If the drivers shown are not the latest versions, refresh the package index by running the sudo apt update -y command.

    • If the message Command ‘ubuntu-drivers’ not found appears, install ubuntu-drivers-common by entering the sudo apt install ubuntu-drivers-common command.

      Command to check driver version to install
      $ ubuntu-drivers devices
      Example of checking driver version
      $ ubuntu-drivers devices
      == /sys/devices/pci0000:00/0000:00:04.0 ==
      modalias: pci:v000010DEd000020B0sv000010DEsd0000134Fbc03sc02i00
      vendor: NVIDIA Corporation
      driver: nvidia-driver-515-server - distro non-free
      driver: nvidia-driver-470 - distro non-free
      driver: nvidia-driver-470-server - distro non-free
      driver: nvidia-driver-510-server - distro non-free
      driver: nvidia-driver-510 - distro non-free
      driver: nvidia-driver-450-server - distro non-free
      driver: nvidia-driver-515 - distro non-free recommended
      driver: xserver-xorg-video-nouveau - distro free builtin
  3. Select an installable driver and proceed with installation.

    Installing Driver
    $ sudo apt install nvidia-driver-470

    Image Installing drivers

  4. Reboot.

    Reboot command
    $ sudo reboot
  5. Check the installed driver information.

    Command to check installed driver information
    $ nvidia-smi
    Example of checking installed driver information
    $ nvidia-smi
    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 470.141.03    Driver Version: 470.141.03    CUDA Version: 11.4   |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |                               |                      |               MIG M. |
    |===============================+======================+======================|
    |   0  NVIDIA A100 80G...  Off  | 00000000:00:05.0 Off |                    0 |
    | N/A   33C    P0    41W / 300W |     35MiB / 80994MiB |      0%      Default |
    |                               |                      |             Disabled |
    +-------------------------------+----------------------+----------------------+
    |   1  NVIDIA A100 80G...  Off  | 00000000:00:06.0 Off |                    0 |
    | N/A   34C    P0    43W / 300W |     35MiB / 80994MiB |      0%      Default |
    |                               |                      |             Disabled |
    +-------------------------------+----------------------+----------------------+
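
If you prefer to verify the driver from a script instead of reading the full nvidia-smi table, the following is a minimal sketch using nvidia-smi's query options; the script name check_gpu_driver.sh is arbitrary, and the 450.80.02 threshold comes from the recommendation table above, so adjust it if you follow a different driver series.

    check_gpu_driver.sh (sketch)
    #!/usr/bin/env bash
    # Print each GPU's name and driver version, and warn if the driver is
    # older than the recommended minimum for NVIDIA A100 (450.80.02).
    set -euo pipefail

    nvidia-smi --query-gpu=name,driver_version --format=csv,noheader |
    while IFS=',' read -r name driver; do
        driver="$(echo "$driver" | xargs)"   # trim the space after the comma
        echo "GPU: ${name}, driver: ${driver}"
        # dpkg --compare-versions handles dotted version strings
        if dpkg --compare-versions "$driver" lt "450.80.02"; then
            echo "WARNING: driver ${driver} is below the recommended 450.80.02" >&2
        fi
    done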

Step 2. Install NVIDIA CUDA toolkit

Install NVIDIA CUDA Toolkit.

Delete the existing Toolkit when reinstalling CUDA Toolkit

When reinstalling the CUDA Toolkit, prepare the following installation environment.

  1. Delete existing CUDA-related settings.

    CUDA configuration deletion command
    $ sudo rm -rf /usr/local/cuda*
  2. If there are any of the following existing settings in ~/.bashrc or /etc/profile, delete them.

    Delete existing settings
    export PATH=$PATH:/usr/local/cuda-11.4/bin
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-11.4/lib64
    export CUDADIR=/usr/local/cuda-11.4
  3. After deleting all existing settings, run the nvcc -V command and confirm that the command is no longer found.

    nvcc -V command execution result
    $ nvcc -V
    Command 'nvcc' not found, but can be installed with: sudo apt install nvidia-cuda-toolkit
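
If you are not sure whether any CUDA-related settings remain, the following optional sketch (not part of the original guide) lists leftover entries so you can remove them by hand; it only inspects the two files mentioned above.

    Leftover CUDA configuration check (sketch)
    $ grep -n 'cuda' ~/.bashrc /etc/profile || echo "No CUDA entries found."
    $ ls -d /usr/local/cuda* 2>/dev/null || echo "No /usr/local/cuda* directories remain."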

Install CUDA Toolkit

  1. Select the CUDA Toolkit version to install from NVIDIA Official Site > CUDA Toolkit Archive. When selecting a version, find the Base Installer command at the bottom.

    Image Select CUDA Toolkit version and check Base Installer command

  2. Run the Base Installer command (first line) to download the CUDA Toolkit installation file.

    Example of file download command for CUDA Toolkit installation
    $ wget https://developer.download.nvidia.com/compute/cuda/11.4.0/local_installers/cuda_11.4.0_470.42.01_linux.run

    Image Result of executing the Base Installer file download command for CUDA Toolkit installation

  3. Execute the Base Installer command (second line) to run the CUDA Toolkit installation file.

    • Running the CUDA Toolkit installation file takes a minute or more.
    File execution command for CUDA Toolkit installation
    $ sudo sh cuda_11.4.0_470.42.01_linux.run
  4. Press the arrow keys to select Continue and press Enter.

    Image Select Continue

  5. Type accept and press Enter.

    Image Enter accept

  6. Press Space to uncheck Driver, select Install, and press Enter.

    • If there is an existing configuration, the message Existing installation of CUDA Toolkit 11.x found will appear. If applicable, select Upgrade all and press Enter.

    Image Select Install

  7. If the CUDA Toolkit has been installed properly, the following screen appears.

    Image CUDA Toolkit installation complete

  8. Run the following command to add CUDA Toolkit-related environment variables.

    Command for adding environment variables related to CUDA Toolkit
    $ sudo sh -c "echo 'export PATH=\$PATH:/usr/local/cuda-11.4/bin' >> /etc/profile"
    $ sudo sh -c "echo 'export LD_LIBRARY_PATH=\$LD_LIBRARY_PATH:/usr/local/cuda-11.4/lib64' >> /etc/profile"
    $ sudo sh -c "echo 'export CUDADIR=/usr/local/cuda-11.4' >> /etc/profile"
    $ source /etc/profile
  9. Run the nvcc -V command to check the installed CUDA Toolkit.

    Image CUDA Toolkit installation confirmation result
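
As an optional sanity check beyond nvcc -V, you can compile and run a trivial CUDA program. The sketch below is not part of the official guide: it writes a three-line CUDA source file to hello_cuda.cu (an arbitrary file name), compiles it with nvcc, and runs it so that each of the four GPU threads prints its index.

    CUDA compile test (sketch)
    $ printf '%s\n' \
        '#include <cstdio>' \
        '__global__ void hello_kernel() { printf("Hello from GPU thread %d\n", threadIdx.x); }' \
        'int main() { hello_kernel<<<1, 4>>>(); cudaDeviceSynchronize(); return 0; }' \
        > hello_cuda.cu
    $ nvcc hello_cuda.cu -o hello_cuda
    $ ./hello_cuda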

Use NPU instance

Install the appropriate driver to use an NPU type instance. Create an instance using an NPU-specific OS image with the drivers preinstalled, or create an instance from a default image and then download and install the driver separately. The following explains how to install the NPU driver on a Linux operating system.

info
  • This guide is based on the Ubuntu 20.04 image currently provided by KakaoCloud and the FuriosaAI WARBOY NPU.
  • APT server settings provided by FuriosaAI are required; detailed instructions can be found in [FuriosaAI Docs > Driver, Firmware, Runtime Installation Guide](https://furiosa-ai.github.io/docs/latest/ko/software/installation.html).
  • For technical inquiries about FuriosaAI NPU driver installation and configuration, please contact the FuriosaAI Customer Center to receive technical support.

Step 1. Install FuriosaAI driver

Install the FuriosaAI NPU driver. Recommended driver and SDK versions are:

| NPU type         | Driver version | SDK version |
| ---------------- | -------------- | ----------- |
| FuriosaAI Warboy | 1.7 or higher  | 0.9.1       |
  1. Run the command to check whether the instance on which you want to install the FuriosaAI NPU driver has a FuriosaAI device.

    FuriosaAI device search command
     $ lspci -nn | grep 1200
    16:00.0 Processing accelerators [1200]: Device [1ed2:0000] (rev 01)

    or

    FuriosaAI device search command
     $ sudo update-pciids
    $ lspci | grep FuriosaAI
    16:00.0 Processing accelerators: FuriosaAI, Inc. Warboy (rev 01)
  2. Proceed with driver installation.

    Driver installation command
     $ sudo apt install furiosa-driver-warboy
  3. Proceed with installing runtime libraries and utilities.

    Runtime library and utility installation command
     $ sudo apt install furiosa-libnux furiosa-toolkit
  4. Hold (pin) the versions of the installed packages so they are not changed by later updates.

    Package version hold command
     $ sudo apt-mark hold furiosa-driver-warboy furiosa-libhal-warboy furiosa-libcompiler furiosa-libnux furiosa-toolkit libonnxruntime
  5. Check the installed driver information.

    Command to check installed driver information
     $ furiosactl info
    Example of checking installed driver information
     $ furiosactl info
    +------+--------+----------------+-------+--------+--------------+
    | NPU | Name | Firmware | Temp. | Power | PCI-BDF |
    +------+--------+----------------+-------+--------+--------------+
    | npu0 | warboy | 1.6.0, c1bebfd | 52°C | 2.52 W | 0000:00:05.0 |
    +------+--------+----------------+-------+--------+--------------+
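
If you want to re-check the installation in one pass, the following optional sketch combines only commands already used above, plus apt-mark showhold, which lists the packages held in step 4.

    NPU installation re-check (sketch)
    $ lspci -nn | grep 1200              # the Warboy device should appear with PCI class [1200]
    $ apt-mark showhold | grep furiosa   # the packages held in step 4 should be listed
    $ furiosactl info                    # temperature, power, and PCI-BDF for each NPU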

Step 2. Install FuriosaAI Python SDK

Install the FuriosaAI Python SDK.
Use a virtual environment (venv, pyenv, pipenv, conda, etc.) to configure an isolated Python execution environment. In this guide, we install Miniconda to create an independent Python execution environment.

  1. Install Miniconda.

    Miniconda installation command
     $ wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
    $ SHELL=bash sh ./Miniconda3-latest-Linux-x86_64.sh
    $ source ~/.bashrc
    (base)$ conda --version
  2. Create and activate an independent Python execution environment.

    Python execution environment creation command
     (base)$ conda create -n my-env python=3.9
    (base)$ conda activate my-env
    (my-env)$ python --version
    Python 3.9.16
  3. Install Furiosa SDK.

    Furiosa SDK installation command
     (my-env)$ pip install --upgrade pip setuptools wheel
    (my-env)$ pip install 'furiosa-sdk[full]'
  4. Check the installed SDK.

    Furiosa SDK installation confirmation command
     (my-env)$ furiosa --version
    Furiosa SDK installation confirmation example
     (my-env)$ furiosa --version
    0.9.1-release (rev: a240782)
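
Before moving on to the examples, the following optional sketch re-confirms that you are inside the intended conda environment and that the SDK packages are visible to pip; none of these commands change anything.

    Environment check (sketch)
    (my-env)$ conda env list              # the active environment is marked with '*'
    (my-env)$ python --version            # expected: Python 3.9.x
    (my-env)$ pip list | grep -i furiosa  # furiosa-sdk and its components should appear
    (my-env)$ furiosa --version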

Step 3. Run the FuriosaAI Python SDK example

Run an example that performs inference using the installed FuriosaAI SDK.

  1. Download the furiosa-sdk example through Git.

    Furiosa SDK repository clone command
     (my-env)$ git clone https://github.com/furiosa-ai/furiosa-sdk --depth 1
  2. Install the libraries needed to run the example.

    Library installation command
     (my-env)$ cd furiosa-sdk/examples/inferences
    (my-env)$ pip install -r requirements.txt
  3. Run the example code.

    Inference example program execution command
     (my-env)$ ./image_classify.py ../assets/images/car.jpg
    libfuriosa_hal.so --- v0.11.0, built @ 43c901f
    INFO:furiosa.common.native:loaded native library libnux.so (0.9.1 d91490fa8)
    Loading and compiling the model /home/ubuntu/furiosa-sdk/examples/inferences/../assets/quantized_models/imagenet_224x224_mobilenet_v1_uint8_quantization-aware-trained_dm_1.0_without_softmax.tflite
    Saving the compilation log into /home/ubuntu/.local/state/furiosa/logs/compile-20230516064013-fzjjxx.log
    Using furiosa-compiler 0.9.1 (rev: d91490fa8 built at 2023-04-19T13:49:26Z)
    2023-05-16T06:40:13.769147Z INFO nux::npu: Npu (npu0pe0-1) is being initialized
    2023-05-16T06:40:13.773685Z INFO nux: NuxInner create with pes: [PeId(0)]

    ...

    Prediction elapsed 0.00 secs
    [Top 5 scores:]
    sports car: 155
    pickup: 152
    car wheel: 148
    convertible: 148
    racer: 143
    2023-05-16T06:41:00.568028Z INFO nux::npu: NPU (npu0pe0-1) has been destroyed
    2023-05-16T06:41:00.568453Z INFO nux::capi: session has been destroyed
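
To try the classifier on your own image, pass any image path to the script in the same way; the path below is just a placeholder.

    Inference on a custom image (example)
    (my-env)$ ./image_classify.py path/to/your_image.jpg   # replace with your own image path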