
TODO Containerd / k3s version for kubernetes deployment

containerd setup

  # 
  # Test system using Docker semantics, then migrate to nerdctl
  # 

  # Install containerd
  sudo apt install -y containerd runc
  sudo systemctl enable --now containerd

  # Install nerdctl
  curl -LO https://github.com/containerd/nerdctl/releases/download/v2.2.0/nerdctl-2.2.0-linux-arm64.tar.gz
  tar xzf nerdctl-2.2.0-linux-arm64.tar.gz
  sudo mv nerdctl containerd-rootless*.sh /usr/local/bin/

  containerd-rootless-setuptool.sh install
  nerdctl 

  systemctl --user restart containerd.service

  # Install buildkit
  cd /usr/local
  sudo curl -LO https://github.com/moby/buildkit/releases/download/v0.26.1/buildkit-v0.26.1.linux-arm64.tar.gz
  sudo tar xzf buildkit-v0.26.1.linux-arm64.tar.gz 
  sudo rm buildkit-v0.26.1.linux-arm64.tar.gz

  containerd-rootless-setuptool.sh install-buildkit-containerd
  # systemctl --user (start|stop|restart) buildkit.service

  # Install cni bridge
  sudo mkdir -p /opt/cni/bin
  curl -LO https://github.com/containernetworking/plugins/releases/download/v1.8.0/cni-plugins-linux-arm64-v1.8.0.tgz
  sudo tar -C /opt/cni/bin -xzf cni-plugins-linux-arm64-v1.8.0.tgz

  # Test nerdctl substitution for docker
  nerdctl run hello-world
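
Once hello-world runs, the Docker commands used later in this document can be
tried with nerdctl in their place. A minimal sketch, assuming the rootless
buildkit and CNI setup above succeeded and the compose file needs no
Docker-only extensions:

  # build one image with nerdctl instead of docker
  cd ~/work-gh/mrva/mrva-docker/containers/server
  nerdctl build -t mrva-server:0.4.5 .

  # bring the collection up with nerdctl's compose subcommand
  cd ~/work-gh/mrva/mrva-docker/
  nerdctl compose -f docker-compose-demo.yml up

  # list running containers
  nerdctl ps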

TODO k8s manifest

Running the container collection with docker-compose is only needed for local use; Docker is fine for that. For k3s and other Kubernetes deployments the docker-compose file is not needed; a manifest replaces it.

Initial manifest in k8s/
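
The manifest itself is still TODO. A minimal sketch of what a single-component
manifest might look like, written as a heredoc in the style used elsewhere in
this document; the image tag, port 8080, and the Service name "server" are
assumptions taken from the build steps and vscode settings below, not a
settled design:

  # the image must also be made available to the cluster, e.g. via a registry
  mkdir -p ~/work-gh/mrva/mrva-docker/k8s
  cat > ~/work-gh/mrva/mrva-docker/k8s/mrva-server.yaml <<eof
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: mrva-server
  spec:
    replicas: 1
    selector:
      matchLabels: {app: mrva-server}
    template:
      metadata:
        labels: {app: mrva-server}
      spec:
        containers:
          - name: server
            image: mrva-server:0.4.5
            ports:
              - containerPort: 8080
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: server
  spec:
    selector: {app: mrva-server}
    ports:
      - port: 8080
        targetPort: 8080
  eof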

Lima VM or linux native for running Docker with full control

Docker Desktop restricts low-level control, especially for tracing inter-container network traffic. To regain full visibility and system-level access, we run Docker inside a Lima VM (on macOS) or on native Linux instead.

lima vm setup

  # on mac
  brew install lima
  # see https://lima-vm.io/docs/installation/ for linux options

  # Create a vm
  cd ~/work-gh/mrva/mrva-docker/lima
  limactl create --name=ubu ubu-docker.yaml

  # setup
  cd ~/work-gh/mrva/mrva-docker/lima
  limactl start ubu
  limactl shell ubu

  # In vm or linux host
  sudo apt-get update
  sudo apt-get install docker.io
  sudo systemctl enable --now docker
  sudo systemctl restart docker
  sudo usermod -aG docker hohn
  newgrp docker
  sudo apt install docker-compose

  # on a native linux host
  ln -s  /home/hohn/work-gh /home/hohn.linux/
  # in the lima vm on a mac (host home is mounted at /Users/hohn):
  ln -s  /Users/hohn/work-gh /home/hohn.linux/

  sudo apt-get install ngrep golang-go tshark postgresql-client-common

docker setup and run

  # 
  #* Build the containers
  # 
  # enter vm (on mac)
  limactl shell ubu

  #**  ghmrva
  cont_dir=~/work-gh/mrva/mrva-docker/containers/ghmrva/
  build_dir=~/work-gh/mrva/gh-mrva
  # 
  cd $build_dir
  go build
  # Copy the new binary
  cp gh-mrva $cont_dir
  #
  cd $cont_dir
  docker build -t mrva-gh-mrva:0.4.5 .

  #**  vscode
  cd ~/work-gh/mrva/mrva-docker/containers/vscode/
  docker build -t code-server-initialized:0.4.5 .

  #**  hepc
  # *slow* in vm with shared folders
  cd ~/work-gh/mrva/mrva-docker/containers/hepc
  rm -fR ./mrvahepc &&
      rsync -a --exclude='*/venv/*' \
            --exclude='*/\.git/*' \
            ../../../mrvahepc .
  docker build -t mrva-hepc-container:0.4.5 -f Dockerfile .

  #**  server
  cont_dir=~/work-gh/mrva/mrva-docker/containers/server/
  build_dir=~/work-gh/mrva/mrvaserver
  # 
  cd $build_dir
  go build
  # Copy the new binary
  cp mrvaserver $cont_dir
  #
  cd $cont_dir
  docker build -t mrva-server:0.4.5 .

  #**  agent
  cont_dir=~/work-gh/mrva/mrva-docker/containers/agent/
  build_dir=~/work-gh/mrva/mrvaagent
  # 
  cd $build_dir
  go build
  # Copy the new binary
  cp mrvaagent $cont_dir
  #
  cd $cont_dir
  docker build -t mrva-agent:0.4.5 .

  #
  #* Start the containers
  cd ~/work-gh/mrva/mrva-docker/
  docker-compose -f docker-compose-demo.yml down
  docker ps

  docker-compose -f docker-compose-demo.yml up 

  # # 
  # #* Populate the db container if needed.
  # # Note: this requires
  # #   /Users/hohn/work-gh/mrva/mrva-open-source-download ,
  # # a large tree not part of this repository.
  # #
  # # This is older. With the use of hepc, this should not be necessary.
  # # 
  # cd ~/work-gh/mrva/mrvacommander/client/qldbtools && uv sync
  # uv run bin/mc-db-populate-minio < scratch/db-info-3.csv

  #* update the binaries in a running container
  #** compile server locally
  cd ~/work-gh/mrva/mrvaserver
  go build

  # Copy the new binary
  cd ~/work-gh/mrva/mrvaserver
  docker cp mrvaserver mrva-server:/usr/local/bin/mrvaserver

  # Restart the binary
  docker exec mrva-server pkill mrvaserver

  #** recompile agent
  # compile locally
  cd ~/work-gh/mrva/mrvaagent
  go build

  # Copy the new binary
  cd ~/work-gh/mrva/mrvaagent
  docker cp mrvaagent mrva-agent:/usr/local/bin/mrvaagent

  # Restart the binary
  docker exec mrva-agent pkill mrvaagent

  #** gh-mrva
  # compile locally
  cd ~/work-gh/mrva/gh-mrva
  go mod edit -replace="github.com/GitHubSecurityLab/gh-mrva=/Users/hohn/work-gh/mrva/gh-mrva"
  go mod tidy 
  GOOS=linux GOARCH=arm64 go build

  # Copy the new binary
  cd ~/work-gh/mrva/gh-mrva
  docker cp gh-mrva mrva-ghmrva:/usr/local/bin/gh-mrva

set up traffic tracing

In a separate terminal:

  limactl shell ubu

  # in vm
  # capture traffic
  ip link show | grep br-
  # form:
  export BR1=br-442ef935d9f8
  export BR2=br-e2ebd32bfe66
  sudo tcpdump -i $BR1 -w trace-$BR1.pcap &
  sudo tcpdump -i $BR2 -w trace-$BR2.pcap &

  # Note docker0 is DOWN

  # send requests elsewhere

  # view full content:
  cp trace-$BR1.pcap /tmp/foo
  tshark -r /tmp/foo -Y 'http.request' -T fields -e http.request.method \
         -e http.host -e http.request.uri | grep '/repositories/'
  tshark -r /tmp/foo -Y http -V | less

  cp trace-$BR2.pcap /tmp/foo
  tshark -r /tmp/foo -Y 'http.request' -T fields -e http.request.method \
         -e http.host -e http.request.uri | grep '/repositories/'
  tshark -r /tmp/foo -Y http -V | less

  # should find these
  tshark -r /tmp/foo \
         -Y 'http.request.uri contains "/repos/"' \
         -T fields -e http.request.uri
  # should not find these
  tshark -r /tmp/foo \
         -Y 'http.request.uri contains "/repositories/"' \
         -T fields -e http.request.uri

send requests via cli

send requests via vs code plugin

strace containers and processes from lima

  # In the lima VM

  # #
  # # install strace in container, if in-container tracing is wanted
  # docker exec -it mrva-code-server bash
  # sudo apt update
  # sudo apt install -y strace

  # 
  # Find the container's main process PID:
  docker ps --format "table {{.ID}}\t{{.Image}}\t{{.Names}}"
  CID=mrva-code-server
  PID=$(docker inspect --format '{{.State.Pid}}' "$CID")
  echo $CID $PID

  #
  # Get the in-container process id.  Example:
  docker exec mrva-code-server ps -ef | grep -i 'node.*extensionhost'
  # coder        824      25  2 18:02 ?        00:01:06 /usr/lib/code-server/lib/node --dns-result-order=ipv4first /usr/lib/code-server/lib/vscode/out/bootstrap-fork --type=extensionHost --transformURIs --useHostProxy=false
  NDID=$(docker exec mrva-code-server ps -ef | grep -i 'node.*extensionhost' | awk '{print($2);}')
  echo $CID $PID $NDID

  # 
  # Run strace on the process in the container
  # using the host's strace
  sudo nsenter -t "$PID" -n -p \
       strace -f -tt -s 200 -e trace=connect,sendto,recvfrom -p 824 -o /tmp/strace-extensionhost.log
  # or
  sudo nsenter -t "$PID" -n -p -m \
       strace -f -tt -s 200 -e trace=connect,sendto,recvfrom,write,read \
       -p $NDID -o /tmp/strace-extensionhost.log
  # or trace the parent
  sudo nsenter -t "$PID" -n -p -m \
       strace -f -tt -s 200 -e trace=connect,sendto,recvfrom,write,read \
       -p 7 -o /tmp/strace-parent.log


  # # using the container's strace
  # sudo nsenter -t "$PID" -n -p -m \
  #      strace -f -tt -s 200 -e trace=connect,sendto,recvfrom -p 824 -o /tmp/strace-extensionhost.log

TODO For distribution, use single fat container or vm

The container configuration is only preparation for k8s deployment. For local use, a VM or "fat" Docker container is sufficient.
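
A sketch of the fat-container variant, in the heredoc style used elsewhere
here. The base image, the start script, and simply backgrounding the two Go
services are assumptions for illustration, not the settled design (a real
image would use a proper init such as supervisord):

  # hypothetical single image holding server and agent together
  cd ~/work-gh/mrva/mrva-docker
  mkdir -p containers/fat
  # reuse the locally built binaries (see the build steps above)
  cp ~/work-gh/mrva/mrvaserver/mrvaserver containers/fat/
  cp ~/work-gh/mrva/mrvaagent/mrvaagent containers/fat/

  cat > containers/fat/start.sh <<eof
  #!/bin/sh
  # start both services in one container
  /usr/local/bin/mrvaserver &
  /usr/local/bin/mrvaagent &
  wait
  eof

  cat > containers/fat/Dockerfile <<eof
  FROM debian:bookworm-slim
  COPY mrvaserver mrvaagent start.sh /usr/local/bin/
  RUN chmod +x /usr/local/bin/start.sh
  CMD ["/usr/local/bin/start.sh"]
  eof

  cd containers/fat && docker build -t mrva-fat:0.4.5 .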

TODO lima vm

intro

When dealing with a highly stateful, evolving system, development workflows that treat containers as immutable black boxes fall apart. Docker's model is great for microservices and stateless demos — but not for real systems where:

  • Executables change frequently (still coding)
  • Data must persist (and be inspected live)
  • Containers cannot be restarted casually (because they are the system)

Inside a single, well-managed VM we can

  • Mount real filesystems (/data, /code, /state) — no awkward volume plugins
  • Recompile and make install — no need to rebuild images
  • Keep all services running — no data loss
  • Log in and debug anything interactively

For local development of a complex, stateful system like MRVA, dumping Docker in favor of chroot or systemd-nspawn-style environments gives us:

  • Full control over state, logs, mounts
  • Zero rebuild delay
  • Native process inspection, debugging, and file editing
  • Persistent state without Docker's volume opacity
  • Easy replication of logical components via shell or Make

Current Logical Structure

Using a 2x3 terminal layout in iTerm2, we have

Window  Directory
UL      mrvaagent
UM      mrvacommander
UR      mrvahepc
LL      mrvaserver
LM      mrva-docker
LR      vscode-codeql

Each of these corresponds to a separate Git repo, aligned with a Docker container.

This gives

  • Logical alignment between containers and repos
  • Physical separation (Docker images/filesystems) that's painful for development
  • Fast navigation and full visibility via iTerm2 panes

vm chroots from docker

The chroot will have the same directory structure as the Docker image. Following the standard layout, with debootstrap or debian:bullseye as base, gives:

  /bin 
  /etc 
  /lib 
  /usr 
  /opt 
  /tmp 
  /var

This aligns precisely with what a Docker image would have. The only differences might be:

Path         Docker                        chroot via debootstrap
/root        present but unused            optional, often empty
/home        sometimes empty in both       create it if needed
/proc, /sys  managed by container runtime  mount manually if needed
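
A minimal sketch of creating such a chroot with debootstrap and supplying the
pseudo-filesystems by hand; the target path is a placeholder:

  # create a Debian chroot with the standard layout
  sudo apt-get install -y debootstrap
  sudo debootstrap bullseye /srv/chroots/mrvaserver http://deb.debian.org/debian

  # mount what the container runtime would normally provide
  sudo mount -t proc  proc  /srv/chroots/mrvaserver/proc
  sudo mount -t sysfs sysfs /srv/chroots/mrvaserver/sys
  sudo mount --bind  /dev   /srv/chroots/mrvaserver/dev

  # enter it
  sudo chroot /srv/chroots/mrvaserver /bin/bash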

Compare to Docker

Feature                     VM + chroot setup    Docker
Rebuild control             Full, script-driven  Layer cache voodoo
File system transparency    Total                Hidden layers
Tool version management     Shared or isolated   Always isolated
Dev→debug roundtrip         Instant              Context build/upload
Disk efficiency             Optional             Layered, rigid
Mental model                File tree + script   "Magic image"
Debug container during run  Simple chroot        Unnatural UX

Rebuild Cadence

Stage             Scope          Frequency           Cost      Notes
VM base image     Full VM        Rare (~1 per loop)  Medium    Clean slate; fast via Lima
VM tweaks         Apt/tools      1-2 per loop        Low       Fully scripted
Chroot setup      Per component  1 per loop          Fast      Includes system + tool setup
Component dev     Go binary      10×+ per chroot     Instant   Local builds, bind mount ok
Full system test  All chroots    After major change  Med-High  Manual or scripted

lima machine creation

  limactl  create -h
  # Create an instance of Lima
  limactl create --list-templates

  # create deb12
  limactl create                                  \
          --arch aarch64                          \
          --cpus 8                                \
          --disk 20                               \
          --memory 8.0                            \
          --name deb12                            \
          template://debian-12

  # start deb12
  limactl start deb12

  # enter deb12
  limactl shell deb12

  # admin
  limactl list

In ~/.lima/deb12/lima.yaml, add

  - location: "/Users/hohn/work-gh/mrva"
    writable: true

to the mounts: section. Then,

  limactl stop deb12
  limactl start deb12

TODO migrating the containers to chroot

Inside the lima vm

  # enter vm
  limactl shell deb12

  # expand setup scripts
  m4 common-setup.m4 agent-setup.m4 > setup-agent-chroot.sh
  m4 common-setup.m4 server-setup.m4 > setup-server-chroot.sh     
  m4 common-setup.m4 ghmrva-setup.m4 > setup-ghmrva-chroot.sh
  m4 common-setup.m4 mrvastore-setup.m4 > setup-mrvastore-chroot.sh
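
How the generated scripts are invoked is not pinned down yet. A hypothetical
sequence, assuming each script takes the chroot target directory as its
argument and installs the system and Go toolchain inside it, with the shared
~/work-gh tree bind-mounted in:

  # hypothetical: create the agent chroot and expose the shared source tree
  sudo bash setup-agent-chroot.sh /srv/chroots/agent
  sudo mkdir -p /srv/chroots/agent/root/work-gh
  sudo mount --bind ~/work-gh /srv/chroots/agent/root/work-gh

  # rebuild and run the agent inside the chroot, no image rebuild needed
  sudo chroot /srv/chroots/agent /bin/bash -c \
       'cd /root/work-gh/mrva/mrvaagent && go build && ./mrvaagent'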

Using the Containers

Running the containers

The updated / combined instructions are in docker setup and run. The details:

  1. Build local container images; follow ./docker-compose-demo.yml

  2. Start the containers

      cd ~/work-gh/mrva/mrva-docker/
      docker-compose -f docker-compose-demo.yml down
      docker ps
      docker-compose -f docker-compose-demo.yml up
  3. View all logs

      docker-compose logs

  4. Follow all logs if started with -d

      docker-compose logs -f
  5. Follow single container, server, logging via

      cd ~/work-gh/mrva/mrvacommander
      docker-compose up -d
      docker-compose logs -f server
  6. Cleanup in case of obscure errors (network or other)

      docker-compose -f docker-compose-demo.yml down --volumes --remove-orphans
      docker network prune
      docker-compose -f docker-compose-demo.yml up --build

Updating binaries in running container

To update the binaries in a running container, mainly during development:

  • server

      #* Cross-compile locally
      cd ~/work-gh/mrva/mrvaserver
      make msla
    
      #* check for running containers
      docker ps --format "table {{.ID}}\t{{.Image}}\t{{.Names}}"
    
      #* Copy the new binary
      cd ~/work-gh/mrva/mrvaserver
      docker cp mrvaserver mrva-server:/usr/local/bin/mrvaserver
    
      #* Restart the binary
      docker exec mrva-server pkill mrvaserver
  • agent

      #* Cross-compile locally
      cd ~/work-gh/mrva/mrvaagent
      make mala
    
      #* Find the agent's container name
      docker ps --format "table {{.ID}}\t{{.Image}}\t{{.Names}}"
    
      #* Copy the new binary
      cd ~/work-gh/mrva/mrvaagent
      docker cp mrvaagent mrva-agent:/usr/local/bin/mrvaagent
    
      #* Restart the binary
      docker exec mrva-agent pkill mrvaagent
  • gh-mrva

      #* Cross-compile locally
      cd ~/work-gh/mrva/gh-mrva
      go mod edit -replace="github.com/GitHubSecurityLab/gh-mrva=/Users/hohn/work-gh/mrva/gh-mrva"
      go mod tidy 
      GOOS=linux GOARCH=arm64 go build
    
      #* Find the gh-mrva container name
      docker ps --format "table {{.ID}}\t{{.Image}}\t{{.Names}}"
    
      #* Copy the new binary
      cd ~/work-gh/mrva/gh-mrva
      docker cp gh-mrva mrva-ghmrva:/usr/local/bin/gh-mrva

Use gh-mrva container to send request via cli

As shell script

Start container and check gh-mrva tool

  # Start an interactive bash shell inside the running Docker container
  docker exec -it mrva-ghmrva bash

  # Check if the gh-mrva tool is installed and accessible
  gh-mrva -h

Set up gh-mrva configuration

  # Create configuration directory and generate config file for gh-mrva
  mkdir -p ~/.config/gh-mrva
  cat > ~/.config/gh-mrva/config.yml <<eof
  # Configuration file for the gh-mrva tool
  # codeql_path: Path to the CodeQL distribution (not used in this setup)
  # controller: Placeholder for a controller NWO (not relevant in this setup)
  # list_file: Path to the repository selection JSON file

  codeql_path: not-used/codeql-path
  controller: not-used/mirva-controller
  list_file: $HOME/work-gh/mrva/gh-mrva/gh-mrva-selection.json
  eof

Create repository selection list

   # Create a directory and generate the JSON file specifying repositories
   mkdir -p ~/work-gh/mrva/gh-mrva
   cat > ~/work-gh/mrva/gh-mrva/gh-mrva-selection.json <<eof
   {
       "mirva-list": [
           "Serial-Studio/Serial-Studio",
           "UEFITool/UEFITool",
           "aircrack-ng/aircrack-ng",
           "bulk-builder/bulk-builder",
           "tesseract/tesseract"
       ]
   }
   eof

Create and submit the first query (FlatBuffersFunc.ql)

   # Generate a sample CodeQL query for functions of interest
   cat > ~/work-gh/mrva/gh-mrva/FlatBuffersFunc.ql <<eof
   /**
    * @name pickfun
    * @description Pick function from FlatBuffers
    * @kind problem
    * @id cpp-flatbuffer-func
    * @problem.severity warning
    */

   import cpp

   from Function f
   where
     f.getName() = "MakeBinaryRegion" or
     f.getName() = "microprotocols_add"
   select f, "definition of MakeBinaryRegion"
   eof

   # Submit the MRVA job with the first query
   cd ~/work-gh/mrva/gh-mrva/
   gh-mrva submit --language cpp --session mirva-session-1172 \
             --list mirva-list                                \
             --query ~/work-gh/mrva/gh-mrva/FlatBuffersFunc.ql

Check status and download results for the first session

   # Check the status of the submitted session
   gh-mrva status --session mirva-session-1172

   # Download SARIF files and databases if there are results.  For the current
   # query / database combination there are zero results, hence no downloads.
   cd ~/work-gh/mrva/gh-mrva/
   gh-mrva download --session mirva-session-1172   \
           --download-dbs                          \
           --output-dir mirva-session-1172

Next, run a query with results

  #**  Set up QLPack for the next query
  # Create a qlpack.yml file required for the next query
  cat > ~/work-gh/mrva/gh-mrva/qlpack.yml <<eof
  library: false
  name: codeql-dataflow-ii-cpp
  version: 0.0.1
  dependencies:
    codeql/cpp-all: 0.5.3
  eof

  #**  Create and submit the second query (Fprintf.ql)
  # Generate a CodeQL query to find calls to fprintf
  cat > ~/work-gh/mrva/gh-mrva/Fprintf.ql <<eof
  /**
   * @name findPrintf
   * @description Find calls to plain fprintf
   * @kind problem
   * @id cpp-fprintf-call
   * @problem.severity warning
   */

  import cpp

  from FunctionCall fc
  where
    fc.getTarget().getName() = "fprintf"
  select fc, "call of fprintf"
  eof

  # Submit a new MRVA job with the second query
  cd ~/work-gh/mrva/gh-mrva/
  gh-mrva submit                                      \
          --language cpp --session mirva-session-2261 \
          --list mirva-list                           \
          --query ~/work-gh/mrva/gh-mrva/Fprintf.ql

Check status and download results for the second session

  # Check the status of the second session
  gh-mrva status --session mirva-session-2261

  # Download SARIF files and databases for the second query
  cd ~/work-gh/mrva/gh-mrva/
  gh-mrva download --session mirva-session-2261   \
          --download-dbs                          \
          --output-dir mirva-session-2261

  ls -l mirva-session-2261

Send request via gui, using vs code

The following sequence works when run from a local VS Code with the custom CodeQL plugin.

Connect to vscode-codeql container at http://localhost:9080/?folder=/home/coder

Provide settings

The file /home/coder/.vscode/settings.json:

  cat > /home/coder/.vscode/settings.json << EOF
  {
      "codeQL.runningQueries.numberOfThreads": 2,
      "codeQL.cli.executablePath": "/opt/codeql/codeql",

      "codeQL.variantAnalysis.enableGhecDr": true,
      "github-enterprise.uri": "http://server:8080/"
  }
  EOF

Provide list of repositories to analyze

ql tab > variant analysis repositories > {}, put this into databases.json

  {
      "version": 1,
      "databases": {
          "variantAnalysis": {
              "repositoryLists": [
                  {
                      "name": "mrva-list",
                      "repositories": [
                          "Serial-Studio/Serial-Studio",
                          "UEFITool/UEFITool",
                          "aircrack-ng/aircrack-ng",
                          "bulk-builder/bulk-builder",
                          "tesseract/tesseract"
                      ]
                  }
              ],
              "owners": [],
              "repositories": []
          }
      },
      "selected": {
          "kind": "variantAnalysisUserDefinedList",
          "listName": "mirva-list"
      }
  }

Make the list current

ql tab > variant analysis repositories > 'select' mrva-list

Provide a query

Select file qldemo/simple.ql and put Fprintf.ql parallel to it:

  cat > /home/coder/qldemo/Fprintf.ql <<eof
  /**
   * @name findPrintf
   * @description find calls to plain fprintf
   * @kind problem
   * @id cpp-fprintf-call
   * @problem.severity warning
   */

  import cpp

  from FunctionCall fc
  where
    fc.getTarget().getName() = "fprintf"
  select fc, "call of fprintf"
  eof

Provide the qlpack specification

Create qlpack.yml for cpp:

  cat > /home/coder/qldemo/qlpack.yml <<eof
  library: false
  name: codeql-dataflow-ii-cpp
  version: 0.0.1
  dependencies:
    codeql/cpp-all: 0.5.3
  eof

Then

  1. Delete qlpack.lock file
  2. In shell,

      cd ~/qldemo
      /opt/codeql/codeql pack install
  3. In GUI, 'install pack dependencies'
  4. In GUI, 'Reload Window'

Submit the analysis job

Fprintf.ql > right click > run variant analysis

Inspect postgres container

From host

  docker volume ls
  DRIVER    VOLUME NAME
  local     4d1061ac2749d8e22000a40826a821ff22e43458ebed78dd3421b7c759bd978d
  local     6611a5f26d9e146c68a8a646bdae01646d1501e4e40424a19490359b666bbbca
  local     17132cc46232508330946a536b4a0edad643d7accb611f1f5e4b48193253e2a0
  local     e1e57867c7c0cf9ac508835a1d1800770d578ae6766fc935dda40628ce229ce0
  local     mrva_pgdata

In container

  # Start an interactive bash shell inside the running Docker container
  docker exec -it mrva-postgres bash

  # Connect to the DB using psql
  psql -U mrva -d mrvadb
  {
      -- inspect using the psql repl

      \dt
      --             List of relations
      --  Schema |      Name       | Type  | Owner
      -- --------+-----------------+-------+-------
      --  public | analyze_jobs    | table | mrva
      --  public | analyze_results | table | mrva
      --  public | job_info        | table | mrva
      --  public | job_repo_map    | table | mrva
      --  public | job_status      | table | mrva
      --  (5 rows)
      SELECT * FROM analyze_jobs;

      SELECT * FROM analyze_results;

      SELECT * FROM job_info;

      SELECT * FROM job_repo_map;

      SELECT * FROM job_status;

      SELECT last_value FROM session_id_seq;

      SELECT * FROM session_id_seq;

      -- may fail if there isn't one
      -- SELECT nextval('session_id_seq');

      -- clear all tables in the public schema.  This requires restarting the
      -- server container to recreate the tables
      DROP SCHEMA public CASCADE;
      CREATE SCHEMA public;

      -- or more selectively
      DROP TABLE IF EXISTS  analyze_jobs CASCADE;
      DROP TABLE IF EXISTS  analyze_results CASCADE;
      DROP TABLE IF EXISTS  job_info CASCADE;
      DROP TABLE IF EXISTS  job_repo_map CASCADE;
      DROP TABLE IF EXISTS  job_status CASCADE;

  }
  # inspect from shell
  pg_dump -U mrva -d mrvadb --data-only |less

  #

Update Container Images

XX:

  grep 'docker tag' containers/*/*.org containers/*/Makefile
(grep "grep --color=auto -nH --null -e 'docker tag' containers/*/*")
  # To snapshot a running Docker container and create a new image from it, use the
  # following CLI sequence: 

  #* Get the container IDs

  docker ps --format "table {{.ID}}\t{{.Image}}\t{{.Names}}"
  # 0:$ docker ps --format "table {{.ID}}\t{{.Image}}\t{{.Names}}"
  # CONTAINER ID   IMAGE                                         NAMES
  # 99de2a875317   ghcr.io/hohn/client-ghmrva-container:0.1.24   mrva-docker-client-ghmrva-1
  # 081900278c0e   ghcr.io/hohn/mrva-server:0.1.24               server
  # a23352c009fb   ghcr.io/hohn/mrva-agent:0.1.24                agent
  # 9e9248a77957   minio/minio:RELEASE.2024-06-11T03-13-30Z      dbstore
  # cd043e5bad77   ghcr.io/hohn/code-server-initialized:0.1.24   mrva-docker-code-server-1
  # 783e30d6f9d0   rabbitmq:3-management                         rabbitmq
  # d05f606b4ea0   ghcr.io/hohn/mrva-hepc-container:0.1.24       hepc
  # 7858ccf18fad   ghcr.io/hohn/dbsdata-container:0.1.24         dbssvc
  # 85d85484849b   minio/minio:RELEASE.2024-06-11T03-13-30Z      artifactstore

  #* Commit the running containers to new images
  # Commit the running container to a new image:
  ( cd ~/work-gh/mrva/mrva-docker/ && rg 'docker (commit)' )

  docker commit 99de2a875317 mrva-client-ghmrva:0.2.0 
  # sha256:2eadb76a6b051200eaa395d2f815bad63f88473a16aa4c0a6cdebb114c556498

  docker commit 081900278c0e   mrva-server-server:0.2.0
  # sha256:0ec38b245021b0aea2c31eab8f75a9141cce8ee789e406cec4dabac484e03aff

  docker commit a23352c009fb   mrva-server-agent:0.2.0
  # sha256:75c6dee1dc57cda571482f7fdb2d3dd292f51e423c1733071927f21f3ab0cec5

  docker commit cd043e5bad77   mrva-client-vscode:0.2.0
  # sha256:b239d13f44637cac3601697dca49325faf123be8cf040c05b6dafe2b11504cc8

  docker commit d05f606b4ea0   mrva-server-hepc:0.2.0
  # sha256:238d39313590837587b7bd235bdfe749e18417b38e046553059295cf2064e0d2

  docker commit 7858ccf18fad   mrva-server-dbsdata:0.2.0
  # sha256:a283d69e6f9ba03856178149de95908dd6fa4b6a8cf407a1464d6cec5fa5fdc0

  #* Verify the newly created images
  docker images

  #* Tag the images for a registry
  ( cd ~/work-gh/mrva/mrva-docker/ && rg 'docker (tag)' )

  tagpushimg () {
      name=$1
      version=$2

      docker tag $name:$version ghcr.io/hohn/$name:$version
      docker push ghcr.io/hohn/$name:$version
  }

  tagpushimg mrva-client-ghmrva 0.2.0

  tagpushimg mrva-server-server 0.2.0

  tagpushimg mrva-server-agent 0.2.0

  tagpushimg mrva-client-vscode 0.2.0

  tagpushimg mrva-server-hepc 0.2.0

  tagpushimg mrva-server-dbsdata 0.2.0

view container image list on ghcr.io: https://github.com/hohn?tab=packages

Project Tools

This repository, mrva-docker, is the highest-level part of the project, as it packages all the others; it therefore also houses simple project tools.

  # On macos

  # install uv
  curl -LsSf https://astral.sh/uv/install.sh | sh
  uv self update

  # set up mrva-env on mac
  cd ~/work-gh/mrva/mrva-docker
  uv venv mrva-env-mac

  # activate mrva-env
  source mrva-env-mac/bin/activate

  # link scripts (lazy 'install')
  cd  mrva-env-mac/bin/
  ln -s ../../bin/* .

Access minio

  • command line

      # 
      brew install minio/stable/mc  # macOS
      # or
      curl -O https://dl.min.io/client/mc/release/linux-amd64/mc && chmod +x mc && sudo mv mc /usr/local/bin/
    
      # Configuration
      MINIO_ALIAS="qldbminio"
      MINIO_URL="http://localhost:9000"
      MINIO_ROOT_USER="user"
      MINIO_ROOT_PASSWORD="mmusty8432"
      QL_DB_BUCKET_NAME="qldb"
    
      # Configure MinIO client
      mc alias set $MINIO_ALIAS $MINIO_URL $MINIO_ROOT_USER $MINIO_ROOT_PASSWORD
    
    
      # List everything uploaded under session 5
      mc ls qldbminio/mrvabucket | grep '^5-'
    
      # Drill into each expected result
      mc ls qldbminio/mrvabucket/5-{Serial-Studio Serial-Studio}
      mc ls qldbminio/mrvabucket/5-{UEFITool UEFITool}
      mc ls qldbminio/mrvabucket/5-{aircrack-ng aircrack-ng}
      mc ls qldbminio/mrvabucket/5-{bulk-builder bulk-builder}
      mc ls qldbminio/mrvabucket/5-{tesseract tesseract}
  • web console http://localhost:9001/browser