Docker image not working

Hi Chris,

I uninstalled Docker Desktop and installed Colima.

I ran the docker build and docker run commands and there is a container running; I just cannot access it, and I have no clue where to look for a log. Docker Desktop showed me the logs and errors, but since I had to uninstall Docker Desktop I no longer have that luxury.

It seems this should be able to run, and having to install all these tools makes me think this is not the right way; otherwise the guide from Adam would have mentioned them as a prerequisite.

And I must be honest here, I have no clue what I am doing with regard to Colima or Rosetta, or which of the two would actually help.

@adam : can you help here? Is this docker image supposed to be able to run on macOS as the host? The error messages posted above all call out some function/method not being available.

Above all I want to avoid wasting everybody’s time if this image was never supposed to run on macOS.

@Steven you need docker desktop as well as colima. Colima uses docker and is just the runtime for the docker containers… I am running PSU on an M1 Mac using docker and Colima.

Process:

  1. Install docker and docker desktop.
  2. Install Colima.
  3. Start colima. colima start
  4. Start the psu docker container.

I typically use this method when images are not compatible with the M1 chip, but see if this works for you.
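
In terminal terms that works out to roughly the following (the image name and port mapping are just an example based on the published PSU image, adjust them to your own setup):

brew install colima
colima start
docker run -d --name powershelluniversal -p 5000:5000 ironmansoftware/universal:latest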

System Version: macOS 13.2.1 (22D68)
Chip: Apple M1 Pro
colima list 

PROFILE    STATUS     ARCH      CPUS    MEMORY    DISK     RUNTIME    ADDRESS
default    Running    x86_64    2       4GiB      60GiB    docker     
docker ps

CONTAINER ID   IMAGE                         COMMAND                  CREATED              STATUS              PORTS                                       NAMES
ad3117b52f63   universal-persistent:latest   "./Universal/Univers…"   About a minute ago   Up About a minute   0.0.0.0:5000->5000/tcp, :::5000->5000/tcp   powershelluniversal

Hi Chris,

I downloaded the Apple silicon version of Docker Desktop and installed it again.
Removed all images and containers and ran the docker build again.

Docker Desktop shows neither the Image nor the Container.

colima list             
PROFILE    STATUS     ARCH       CPUS    MEMORY    DISK     RUNTIME    ADDRESS
default    Running    aarch64    2       2GiB      60GiB    docker

I see that the “ARCH” output is different from yours; is that significant, perhaps?

Maybe…

You can try and start colima with a different ARCH.

colima start --arch x86_64

Hi Chris,

I did that, and I have no idea if it worked or what the error might be if it did not.
The container seems to be running, but I do not know how to check on it.

I ran this command because I got an error running the image:

lsof -iTCP:6002 -iUDP:6002 -n -P

COMMAND  PID              USER   FD   TYPE             DEVICE SIZE/OFF NODE NAME
ssh     3058 stevenspierenburg   16u  IPv4 0x670cdcec06e29365      0t0  TCP *:6002 (LISTEN)

No idea why this port 6002 suddenly shows up as “ssh”.

Trying to run the image:

docker run -it --name powershelluniversal --mount source=psudata,target=/home/data --rm -d -p 6002:6002/tcp universal-persistent:latest

Results:

docker: Cannot connect to the Docker daemon at unix:///Users/stevenspierenburg/.colima/default/docker.sock. Is the docker daemon running?.
See ‘docker run --help’.

I have a feeling I am going downhill and getting worse results. This is supposed to be easy, if I understood it right, but it is getting increasingly difficult. Did I break my system? Do I need to undo what I installed or set?

I’m sorry, I must sound like a whiner, and in a way I am. This is so frustrating, and it makes me wonder: am I too invested in PSU as a solution? Should I look around for another product that fights me a little less? I apologise for the whining.

And before I buy this product, these issues give me some doubts, and I start to wonder if support is better when you pay for it. I suppose it is, that makes sense. But buying a product where I face immediate blockers with no resolution in sight does not make for a good start.

I can see now that the issue of

Cannot connect to the Docker daemon at unix:///Users/stevenspierenburg/.colima/default/docker.sock. Is the docker daemon running?

was due to Colima not running at that moment.
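
For anyone else hitting this: before running docker commands you can check whether the Colima VM is actually up, e.g.:

colima status
colima start    # if it reports that colima is not running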

I removed everything, re-installed Docker, threw away the older image and re-built the image.

I used:

docker build . --tag=universal-persistent

The dockerfile contains:

FROM ironmansoftware/universal:latest
LABEL description="Universal - The ultimate platform for building web-based IT Tools"
EXPOSE 5000
VOLUME ["/home/data"]
ENV Data__RepositoryPath /home/data/Repository
ENV Data__ConnectionString /home/data/database.db
ENV UniversalDashboard__AssetsFolder /home/data/UniversalDashboard
ENV Logging__Path /home/data/logs/log.txt
ENTRYPOINT ["./Universal/Universal.Server"]

The run command is:

docker run -it --name powershelluniversal --mount source=psudata,target=/home/data --rm -d -p 6002:6002/tcp universal-persistent:latest

Still nothing to access in the browser, though. Where do I need to look for the log files to investigate?

Hi @Steven,

Once you have your M1 chip issues sorted, that compose script I posted should work.

According to your build config above you are exposing port 5000, which is correct; however, when you run your built container you map host port 6002 to container port 6002, which is not exposed.
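
In other words, keeping everything else in your run command the same but fixing the port mapping should do it; something like this (untested):

docker run -it --name powershelluniversal --mount source=psudata,target=/home/data --rm -d -p 6002:5000 universal-persistent:latest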

Please note that your build is only rebuilding whatever Adam has already built in his published docker images.

I can also see that your storage is mounted to psudata. Have you created a storage mount in docker called psudata? Otherwise, you should be able to replace that with a folder path.
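
You can check whether that named volume exists, and create it if not, with:

docker volume ls
docker volume create psudata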

Hi Matt,

I thought I had to stay away from the EXPOSE in the dockerfile because Adam already ‘did something’?

Can you tell me what the right way is to start all of this? Do I need a Dockerfile? Do I need to assign ports?
And storage, no idea. I want to start simple, get it working, and then work my way up to more complex topics like storage. We can omit it for now if that is better?

I cannot use port 5000 as that is already claimed by a core process in macOS.

Hi @Steven,

Two processes are being mixed up here: the build process and the deployment process.

My understanding is you wish to deploy PSU.

The Build Process

Adam has already built PSU for you; it can be found on Docker Hub, which is where the image line in your compose file points to.

The Dockerfile is only used in the build process, so you do not need to write one. The Dockerfile is where you declare which ports of the container you are building should be open (exposed).

For the purposes of getting PSU working, you do not need to build anything. I run multiple fully licensed PSU environments, with SQL and everything, and I do not need to build anything at my end.
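
In other words, pulling the published image is all the ‘build’ you need, for example:

docker pull ironmansoftware/universal:latest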

The Deploy process

This is the part of docker you need to focus on to get PSU running. There are two ways to run a Docker container:

  1. Use the docker run command on the terminal
  2. Use a Compose template

Both do the same thing; however, you will want to use compose files so that all your settings are in one file you can easily back up, instead of digging through your terminal history for old commands and checking that you are in the correct directory.

If your M1 issues are now sorted through reinstallation, I would advise you to repeat the hello world exercise to confirm everything is working correctly on the docker end.

Once that is done, open up a terminal and navigate to your docker-compose.yml file.

Once you find it, confirm it matches what we have in the above posts, then run it with docker compose up -d.
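
In terminal terms that is roughly (the folder path here is just a placeholder):

docker run hello-world              # sanity check that docker itself works
cd /path/to/your/compose/folder
docker compose up -d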

Things to look out for in your compose file

  1. As previously mentioned, you do not need to worry about the build process or a Dockerfile. Adam has already done this for you and posted the image on Docker Hub. This is referenced on the image line of the compose file.

  2. The EXPOSE 5000 line is part of the Dockerfile used in the build and is therefore out of scope for getting the container to work. If you check the last lines of the docker compose file, you will see a port mapping of 6002:5000. This means container port 5000 will be mapped to port 6002 on your laptop.

    You could have multiple instances of PSU running on your laptop; all you would need to do is change/increment port 6002 in each container config in your compose file to be able to connect to those instances.

Hope this helps.

Matt.

Hi Matt,

Finally, success!!!

I have a compose yaml file, like you suggested earlier in this thread:

---
version: "3"
services:
  PowershellUniversal:
    container_name: PowershellUniversal
    image: ironmansoftware/universal:3.7.14-ubuntu-20.04
    restart: unless-stopped
    volumes:
      - /Users/stevenspierenburg/Temp/Powershell Universal/docker/volumes/psu:/root
    ports:
      - "6002:5000"

Then I ran the docker compose command and it worked!

Now I have to figure out what is truly needed to make this work. I think it is like this:

  1. Install docker (docker desktop or brew to install it)
  2. Install Colima
  3. Run Colima with ARCH type x86_64, like so
    colima start --arch x86_64
  4. Create a yaml file for docker compose, example is above
  5. Make sure the folder structure under the folder containing the compose yaml file looks like this:
    /your_folder/volumes/psu
    Basically, the folder structure from the “volumes:” part of the compose yaml file needs to exist.
  6. In the folder with the compose yaml file, start docker compose like so:
    docker compose up -d

I think I have it right like so, do you see anything odd in there?

Some questions if I may:

  1. What if I do want to change the image and create my own version of it? What would I need to do?
  2. Is there a way to check the status of the Container(s) I started with docker compose? Logs?
  3. Will the “volumes:” part of the yaml file ensure the started container saves all its config and data there? If I start 5 PSU containers, will they each have their own data saved, separate from the other containers?
  4. I guess that by using Colima I no longer get the benefit of Docker Desktop? Is there any other GUI management tool I could look into?

Again Matt, super thanks for the help and getting me there!
And @LiquoriChris : your help was invaluable as well, to get the M1 issue out of the way!

Hi @Steven,

Good to hear you got it working.

In answer to your questions:

  1. If you want to change the image and make your own, then I would read up on the docker build process. This would include an introduction to Dockerfiles and builds. You would also need a container registry to upload your containers to for deployment to other devices. Personally, I am unsure what you would want to add to your container. The /root folder in the container, which is where your volume folder is mounted, is where all the changeable settings are held. I have tried very hard to work around any need for building a custom container so I do not have to manage and support a container build process; that way I know that whatever image version I have is supported.

  2. Docker Compose is a template deployment mechanism only. To check on your deployed containers from the terminal, you can run docker compose ps. This will give you a list of deployed containers (see the example commands at the end of this post).

  3. If you decide to spin up a 2nd instance, you will need to create a new folder for that instance, e.g. /Users/stevenspierenburg/Temp/Powershell Universal/docker/volumes/psu2. All of that instance’s data would be stored in that folder. For education’s sake, if you wanted to run 5 instances your compose file would look like this:

    ---
    version: "3"
    services:
      PowershellUniversal:
        container_name: PowershellUniversal
        image: ironmansoftware/universal:3.7.14-ubuntu-20.04
        restart: unless-stopped
        volumes:
          - /Users/stevenspierenburg/Temp/Powershell Universal/docker/volumes/psu:/root
        ports:
          - "6002:5000"
      PowershellUniversal2:
        container_name: PowershellUniversal2
        image: ironmansoftware/universal:3.7.14-ubuntu-20.04
        restart: unless-stopped
        volumes:
          - /Users/stevenspierenburg/Temp/Powershell Universal/docker/volumes/psu2:/root
        ports:
          - "6003:5000"
      PowershellUniversal3:
        container_name: PowershellUniversal3
        image: ironmansoftware/universal:3.7.12-ubuntu-20.04
        restart: unless-stopped
        volumes:
          - /Users/stevenspierenburg/Temp/Powershell Universal/docker/volumes/psu3:/root
        ports:
          - "6004:5000"
      PowershellUniversal4:
        container_name: PowershellUniversal4
        image: ironmansoftware/universal:3.7.12-ubuntu-20.04
        restart: unless-stopped
        volumes:
          - /Users/stevenspierenburg/Temp/Powershell Universal/docker/volumes/psu4:/root
        ports:
          - "6005:5000"
      PowershellUniversal5:
        container_name: PowershellUniversal5
        image: ironmansoftware/universal:3.7.14-ubuntu-20.04
        restart: unless-stopped
        volumes:
          - /Users/stevenspierenburg/Temp/Powershell Universal/docker/volumes/psu5:/root
        ports:
          - "6006:5000"
    

    To demonstrate the power here, I have added 4 more containers. You will notice the following are unique:
    1.1. Service instance name
    1.2. Container name
    1.3. Volume path to mount /root on (this will create additional folders in your docker/volumes folder)
    1.4. The port number to access it on. Note that 5000 stays the same, as that is what Adam exposed in his build.

    I have also mixed up the version numbers on the images a bit to demonstrate that you can have multiple copies of multiple versions if you wish.

    If you ever wish to stop these containers, you can run docker compose down.

    Also, at some point you will want to update images to newer versions. To do this while the containers are running, you will need to:
    1.1. Identify the version you wish to go to and locate the image on docker hub. This bookmark filters the correct type of image for what you have deployed: Docker
    1.2. Decide which container or containers you wish to update out of your 5 listed containers
    1.3. Replace the version numbers for those containers and save the docker compose file.
    1.4. From the terminal, run docker compose pull. This will pull any images listed in your compose file that you do not already have.
    1.5. Once those images are downloaded, run docker compose up -d again and that will re-create the containers which have been modified, leaving the remaining ones running.

    You may wish to play about with the version numbers on some of the containers to get the idea of this.

    One bonus of this method is that if you upgrade to a new image and realize something like the dashboard navigation is faulty, you can modify the image line to revert to the previous version. Then, when you run docker compose up -d, your container will run on the previous version you specified until that bug can be fixed.

  4. I am afraid I am not an Apple user, so I do not know any of the Apple programs. You may wish to look for some YouTube videos on container management GUIs; there is quite a popular one that itself runs as a container. But then again, something better for the Apple ecosystem might exist.
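
To put the status check from point 2 and the update steps above into concrete commands (run them from the folder containing your compose file):

docker compose ps                          # list the containers defined in this compose file
docker compose logs PowershellUniversal    # show that service's logs (or just docker compose logs for everything, add -f to follow)
docker compose pull                        # download any newer image tags referenced in the file
docker compose up -d                       # re-create only the containers whose config changed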

Hi Steven,

Some additional reading for you:

  1. What is a docker-compose.yml file? - Stack Overflow
  2. Docker Compose Explained. Learn how to create a YAML file for… | by Bharathiraja | CodeX | Medium

Hi Matt,

Super information you have given me, appreciated!

I will post my findings on making Docker work with PSU images on macOS with Apple M1 Pro/Max chipsets. I hope this will guide others on how to achieve this.
I have two methods: one with Docker Desktop and one with Colima.
It’s not meant to be definitive, but I hope it will get others on the right path.

===================================================
Docker Installation
How to run x86 containers on Apple Mac M1 Pro

  1. Install Rosetta 2 (Intel machine capabilities for Apple silicon M1)
    softwareupdate --install-rosetta

  2. Install docker (docker desktop or brew to install it)
    a) www.docker.com and download the DMG file
    b) brew install --cask docker

  3. Configure Docker Desktop to support x86 platforms
    https://collabnix.com/warning-the-requested-images-platform-linux-amd64-does-not-match-the-detected-host-platform-linux-arm64-v8/
    a) Go to the Settings section and check the box to enable the Apple Virtualization framework
    b) Under the “Features in development” section, enable Rosetta

  4. Create a yaml file for docker compose (example above; see also the full example after this list)
    Make sure you have the platform option in that yaml file or pass it on the command line:
    platform: linux/amd64

  5. Make sure the folder structure under the folder containing the compose yaml file looks like this:
    /your_folder/volumes/psu
    Basically, the folder structure from the “volumes:” part of the compose yaml file needs to exist.

  6. In the folder with the compose yaml file, start docker compose like so:
    docker compose up -d
    –or–
    docker run --platform linux/amd64 etc…

  7. To stop the Container, run this compose command:
    docker compose stop
    –or–
    docker stop <container_id>
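
For reference, here is the compose yaml file from earlier in the thread with the platform option added (the volume path is mine, adjust it to your own folder structure):

    ---
    version: "3"
    services:
      PowershellUniversal:
        container_name: PowershellUniversal
        image: ironmansoftware/universal:3.7.14-ubuntu-20.04
        platform: linux/amd64
        restart: unless-stopped
        volumes:
          - /Users/stevenspierenburg/Temp/Powershell Universal/docker/volumes/psu:/root
        ports:
          - "6002:5000"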

===================================================
Docker Installation with Colima as alternative

  1. Install Rosetta 2 (Intel machine capabilities for Apple silicon M1)
    softwareupdate --install-rosetta

  2. Install docker (docker desktop or brew to install it)
    a) www.docker.com and download the DMG file
    b) brew install --cask docker

  3. Install Colima
    brew install colima

  4. Run Colima with ARCH type x86_64, like so (see also the resource note after this list)
    colima start --arch x86_64

  5. Create a yaml file for docker compose, example is above

  6. Make sure the folder structure under the folder containing the compose yaml file looks like this:
    /your_folder/volumes/psu
    Basically, the folder structure from the “volumes:” part of the compose yaml file needs to exist.

  7. In the folder with the compose yaml file, start docker compose like so:
    docker compose up -d

  8. To stop the Container, run this compose command:
    docker compose stop
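
Note: Colima starts with fairly small defaults (the colima list output earlier in this thread showed 2 CPUs and 2GiB of memory). If you want to give the VM more resources, you can pass extra flags when you first start it, for example (flag names may vary slightly between Colima versions):

    colima start --arch x86_64 --cpu 4 --memory 8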

Hi @Steven,

This looks good; maybe it will make it into the documentation. It may not be a bad move to have the documentation broken down into build and deployment processes, to help engineers get something working out of the box and then move on to building their own container down the line if they feel they need to.

I’m trying not to overload you with too much info while you get your head around how docker deployment and management works. It would be good to hear how you get on. Now that you have a compose file and some container volumes, those can in theory be transferred to any device running docker and run there, assuming the volume paths etc. are modified.

I’m not sure what @LiquoriChris thinks about this; however, it’s interesting that the ARCH flag needs to be changed. Back in the day, Apple used the PPC (PowerPC) architecture, which I saw referenced. It looks like that later changed to x86; however, Wikipedia says the M1 is an ARMv8 chip (I wish I had looked this up a few days ago), which may explain why there are some M1 gremlins.

That’s awesome you got it to work. As for the M1 Mac and colima start… I do not give the --arch option; I just run colima start, which starts it with x86_64 as the arch. Not sure of the inner workings around it, but that is what worked for me.

Perhaps you had Colima configured before? You only have to give an option once and it will remember it.
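
If you ever need to change the arch of an existing profile, I believe you have to recreate it, something like this (note that colima delete throws away that VM, including the containers and images inside it):

colima stop
colima delete
colima start --arch x86_64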

I also posted a way to do this without Colima, using just some settings in Docker Desktop in combination with Rosetta 2.

Again, thanks a lot for all your help!

Hi @adam,

I have come across a similar problem to this with M1 chips.

In the build process, do you use buildx?

Our docker build is super simple. No buildx. Our dockerfile is:

# Docker image file that describes an Ubuntu 20.04 image, based on the Microsoft PowerShell image, with PowerShell Universal installed
ARG fromTag=ubuntu-20.04
ARG imageRepo=mcr.microsoft.com/powershell
FROM ${imageRepo}:${fromTag} AS installer-env

ARG VERSION=1.3.1
ARG PACKAGE_URL=https://imsreleases.blob.core.windows.net/universal/production/${VERSION}/Universal.linux-x64.${VERSION}.zip
ARG DEBIAN_FRONTEND=noninteractive 

# Install dependencies and clean up
RUN apt-get update \
    && apt-get install -y apt-utils 2>&1 | grep -v "debconf: delaying package configuration, since apt-utils is not installed" \
    && apt-get install --no-install-recommends -y \
    # curl is required to grab the Linux package
    curl \
    # less is required for help in powershell
    less \
    # required to set up the locale
    locales \
    # required for SSL
    ca-certificates \
    gss-ntlmssp \
    # PowerShell remoting over SSH dependencies
    openssh-client \
    unzip \
    libc6-dev \
    tzdata \
    # Download the Linux package and save it
    && echo ${PACKAGE_URL} \
    && curl -sSL ${PACKAGE_URL} -o /tmp/universal.zip \
    && unzip /tmp/universal.zip -d ./home/Universal || : \
    # remove the downloaded package
    && rm /tmp/universal.zip \
    && chmod +x ./home/Universal/Universal.Server

# Use the exec (JSON array) form for ENTRYPOINT so Docker does not prepend /bin/sh -c
WORKDIR /home
EXPOSE 5000
ENTRYPOINT ["./Universal/Universal.Server"]

The build command is:

param($Version = (Get-Content "$PSScriptRoot/../../../version.txt" -Raw).Trim())
Set-Location $PSScriptRoot
docker build . --tag=universal --build-arg VERSION=$Version

We effectively download the specified version of PSU, extract it into the container, and then set up a couple of settings like EXPOSE and ENTRYPOINT.

Hi @adam,

I have noticed that all your builds are amd64, including the ARM builds.

I suspect M1 devices do not have an emulator for this.

Docker buildx allows multi-arch builds all in one image release.

However, from your Dockerfile I can see you’re using a ‘Microsoft PowerShell on Ubuntu’ base container, which explains the extra emulation layer needed on M1 devices.

I can see there are some arm64 builds of your source container, which I suspect would allow an M1 device to work ‘out of the box’.


It looks like the MCR repo does not support buildx. However, by running multiple docker build runs, I suspect swapping in the arm64 arch base image would fix the M1 problem.
I suspect you would also need to build for arm32/‘arm’ users too.
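
For what it’s worth, a multi-arch buildx run would look roughly like this (the tag is just a placeholder, and it assumes an arm64-capable base image as discussed above):

docker buildx create --use
docker buildx build --platform linux/amd64,linux/arm64 --tag <registry>/universal:<version> --push .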

From what I have read, removing the emulation layer and providing arch-matching builds tends to reduce power consumption.

I’m happy to discuss this more if you are interested in publishing arch-specific builds.