
Friday, July 24, 2020

Change ACS 6.x/7.x, Share 6.x/7.x, Proxy (nginx), Solr6 and DB (postgres) ports using docker-compose.yml and DockerFile


It is a very common requirement to use a different set of available ports, as per company policy, rather than the default ports for the applications.


Background:


Before container-based environments, we had to follow the steps given below in order to change the ports (these points still apply to an environment set up via the distribution package):

  • Update the default connector ports 8080, 8443, 8009 and 8005 to the required ports.
  • Sometimes we use JPDA_ADDRESS for remote debugging, which defaults to '8000' in $ALFRESCO_INSTALL_DIR/tomcat/bin/catalina.sh. If you use remote debugging, update it to the required port as needed.
  • Update the required 'alfresco' and 'share' ports in $ALFRESCO_INSTALL_DIR/tomcat/shared/classes/alfresco-global.properties (see the sample snippet after this list).
  • Update the required 'alfresco' ports in $ALFRESCO_INSTALL_DIR/tomcat/shared/classes/alfresco/web-extension/share-config-custom.xml for the remote configuration (<config evaluator="string-compare" condition="Remote">):
    • Update the 'alfresco' endpoint url:
      • http://localhost:{REQUIRED_PORT}/alfresco/s --> DEFAULT: 8080
    • Update the 'alfresco-feed' endpoint url:
      • http://localhost:{REQUIRED_PORT}/alfresco/s --> DEFAULT: 8080
    • Update the 'alfresco-api' endpoint url:
      • http://localhost:{REQUIRED_PORT}/alfresco/api --> DEFAULT: 8080
  • Update the required 'alfresco' port in the 'solrcore.properties' file:
    • Find the 'alfresco.port' property in solrcore.properties and update it:
      • alfresco.port=<requiredPort>, default: 8080
    • For SOLR4, the paths are:
      • $ALFRESCO_INSTALL_DIR/solr4/workspace-SpacesStore/conf/solrcore.properties
      • $ALFRESCO_INSTALL_DIR/solr4/archive-SpacesStore/conf/solrcore.properties
    • For Alfresco Search Services (Solr 6), the paths are:
      • $SOLR_HOME/solrhome/alfresco/conf/solrcore.properties
      • $SOLR_HOME/solrhome/archive/conf/solrcore.properties
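
For reference, here is a minimal sketch of the relevant alfresco-global.properties entries when moving off the default 8080 (7080 is just an example port; adjust host/protocol to your environment):

```
# Repository (alfresco) endpoint - 7080 is just an example port
alfresco.protocol=http
alfresco.host=localhost
alfresco.port=7080

# Share endpoint
share.protocol=http
share.host=localhost
share.port=7080
```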



All of the above steps remain almost the same for ACS 6.x as well if you are using a standalone installation and are not managing the services, images and containers via a docker-based deployment.

When using a docker-based deployment, we use the docker-compose.yml file to configure all the services, and it serves as the base for launching the corresponding containers. We configure all the required ports (host and container ports) in the docker-compose.yml file and expose any additional ports, if required, either via docker-compose.yml or a DockerFile.

It is possible to change the host ports via the docker-compose.yml file, but the default container ports exposed within the docker images (especially the Tomcat connector ports shipped with the acs and share images) can't be changed via docker-compose.yml alone. We have to use a DockerFile, which can update the required ports during the build process.
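
For example, a DockerFile for the acs service could rewrite the Tomcat connector ports at build time. This is only a sketch; the base image name/tag is an assumption and should match the image referenced in your docker-compose.yml (the Tomcat location /usr/local/tomcat matches the paths used elsewhere in this post):

```
# Sketch only: rebuild the ACS image with custom Tomcat connector ports.
# The base image name/tag below is an assumption - use the image from your docker-compose.yml.
FROM alfresco/alfresco-content-repository-community:7.3.0

# Rewrite the default connector/shutdown ports in Tomcat's server.xml.
# Depending on the base image you may need 'USER root' before this RUN (and a switch back afterwards).
RUN sed -i 's/port="8080"/port="7080"/g' /usr/local/tomcat/conf/server.xml && \
    sed -i 's/port="8443"/port="7443"/g' /usr/local/tomcat/conf/server.xml && \
    sed -i 's/port="8009"/port="7009"/g' /usr/local/tomcat/conf/server.xml && \
    sed -i 's/port="8005"/port="7005"/g' /usr/local/tomcat/conf/server.xml

# Expose the new HTTP port so it can be mapped from docker-compose.yml
EXPOSE 7080
```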

Similarly, if you are using the proxy (nginx), then the 'nginx.conf' configuration also needs an update to reference the required ports. By default nginx will try to forward all requests to '8080', which is the default port for acs and share.
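
A hedged sketch of the relevant nginx.conf change is shown below; 'alfresco' and 'share' are assumed to be the compose service names, and 7080 is the example port used in this post:

```
# Sketch only: point the proxy at the rebuilt ACS/Share containers.
server {
    listen 8080;                          # container-side port; the host mapping is set in docker-compose.yml

    location /alfresco/ {
        proxy_pass http://alfresco:7080;  # was http://alfresco:8080
    }

    location /share/ {
        proxy_pass http://share:7080;     # was http://share:8080
    }
}
```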

It is effectively like rebuilding the original images (acs, share, proxy etc.) with the updated ports; the containers are then launched from the updated images.

For some of the services, such as 'postgres', you can change the default port directly from docker-compose.yml because the service definition gives you access to the command line; it is like executing 'postgres -p 5433' on the command line.
We can simply pass the command-line parameter '-p <requiredPort>' or use the 'expose' option in the 'postgres' service definition in docker-compose.yml.
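
A hedged docker-compose.yml sketch for the postgres service (image tag and credentials are placeholders; 5555 is the example port used in this post):

```
  postgres:
    image: postgres:13.3                  # placeholder tag - keep the tag from your existing compose file
    command: postgres -p 5555             # start postgres on 5555 instead of 5432
    environment:
      - POSTGRES_USER=alfresco
      - POSTGRES_PASSWORD=alfresco
      - POSTGRES_DB=alfresco
    ports:
      - "5555:5555"                       # host:container
```

Remember that the repository must also be told about the new port, e.g. -Ddb.url=jdbc:postgresql://postgres:5555/alfresco in the alfresco service's JAVA_OPTS.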

For 'solr6 (alfresco-search-services)', we can either update the startup script, update shared.properties via a DockerFile, or add the SOLR_PORT environment variable in docker-compose.yml. This environment variable is used by the embedded Jetty server to start the service on the required port.
Additionally, you can also pass a JVM parameter via JAVA_OPTS, e.g. -Djetty.port=9999
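
A hedged sketch of the solr6 service definition using SOLR_PORT (the image tag is a placeholder; 9999 and 7080 are the example ports used in this post):

```
  solr6:
    image: alfresco/alfresco-search-services:2.0.5   # placeholder tag
    environment:
      - SOLR_PORT=9999                    # Jetty starts on 9999 instead of 8983
      - SOLR_ALFRESCO_HOST=alfresco
      - SOLR_ALFRESCO_PORT=7080           # talk back to the repository on its new port
      - SOLR_SOLR_HOST=solr6
      - SOLR_SOLR_PORT=9999
      - SOLR_CREATE_ALFRESCO_DEFAULTS=alfresco,archive
    ports:
      - "9999:9999"                       # host:container
```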

Change Alfresco, Share, Nginx (Proxy), Solr and Database (postgres) ports with the help of DockerFile and docker-compose.yml:


Considering the aforementioned steps for changing the ports, we need to follow the same approach for a docker-based deployment as well, but with the help of docker-compose.yml and DockerFile.

I will be using port '7080' instead of '8080' for acs, share and proxy. I will also update the tomcat connector ports to 7005, 7009 and 7443. I will use '5555' instead of '5432' for postgres and '9999' instead of '8983' for solr6.


Here are the default ports:

| Service | Default Ports | Note |
| --- | --- | --- |
| Tomcat connector ports | 8005, 8080, 8443, 8009 | Defaults within the Tomcat shipped with the alfresco and share images. |
| alfresco | 8080 | |
| share | 8080 | |
| proxy | 80, 8080 -> 8080 | The default port on the proxy (nginx) is 80; port 8080 is exposed to provide access to alfresco and share. nginx forwards requests on 8080 (host port) to alfresco's and share's port 8080. We can easily change the host port to any other port, e.g. 81 -> 8080 (requests arrive on port 81, which nginx forwards to 8080). |
| postgres | 5432 | |
| solr6 | 8083 -> 8983 | 8083 is the host port and 8983 is the container port. Alfresco uses 8983 to communicate with solr6; administrators use 8083 to access the Solr admin console. Access via browser: http://localhost:8083/ |
| transform-core-aio | 8090 -> 8090 | Host and container ports are the same here. Alfresco uses 8090 to communicate with the transformation services, and we can use port 8090 to access them via the browser. Access via browser: http://localhost:8090/ |
| activemq | 8161 -> 8161 (Web Console) | Host and container ports are the same here. Port 8161 can be used to access the Web Console via the browser, and alfresco uses the same port to communicate with activemq. Access Web Console via browser: http://localhost:8161/ |


The steps we are going to follow are applicable to ACS 6.x, ACS 7.1 and ACS 7.2.

This post has been updated to match the latest ACS version (ACS 7.3) as well. 

Let's create some directories for keeping the DockerFile and required configs, which will be used for rebuilding the updated images from the OOTB images (a shell sketch of this layout follows the list below).

  • Create a directory 'configs-to-override' in the same directory where you have kept your 'docker-compose.yml' file.
  • Under 'configs-to-override' directory, create following directories:
    • Create 'alfresco' directory --> It will be used to keep 'DockerFile' for acs image
      • Create an empty 'DockerFile' file which we will use to put build instructions for 'alfresco' service
    • Create 'share' directory --> It will be used to keep 'DockerFile' for share image
      • Create an empty 'DockerFile' file which we will use to put build instructions for 'share' service
    • Create 'proxy' directory --> It will be used to keep 'DockerFile' and 'nginx.conf' file for nginx image
      • Create an empty 'DockerFile' file which we will use to put build instructions for 'proxy' service
      • Create an empty 'nginx.conf' file which we will use to put proxy configuration for services
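
Putting it together, the directory layout above can be created with a few shell commands (a sketch; run it from the directory that contains docker-compose.yml):

```
# Create the override directories next to docker-compose.yml
mkdir -p configs-to-override/alfresco configs-to-override/share configs-to-override/proxy

# Create the empty build/config files to be filled in later
touch configs-to-override/alfresco/DockerFile \
      configs-to-override/share/DockerFile \
      configs-to-override/proxy/DockerFile \
      configs-to-override/proxy/nginx.conf
```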

Friday, November 15, 2019

Alfresco 6.x with SDK4.x and docker command cheatsheet


# Alfresco Content Services 6.x AIO Demo Project using SDK 4.x

This is a demo project. It provides a sample project structure built on SDK 4.x (latest 4.0.0) and also provides samples of content models, actions, behaviors, scheduled jobs, webscripts, Aikau pages, Surf pages, etc.


# Prerequisite

##### Understand the concept:
`https://docs.alfresco.com/6.0/concepts/deploy-concepts.html`

##### Required Tools and deployment:
Before starting with an SDK 4.x, docker-based project, install Docker on your development environment.
Follow instructions given here: `https://docs.docker.com/install/`
You may have to create an account on docker hub.

Go to: `https://hub.docker.com/signup?next=%2F%3Foverlay%3Donboarding` to create an account.

Visit here for more details: `https://docs.alfresco.com/6.0/concepts/deploy-prereqs.html`


# Getting started

Run with `./run.sh build_start` or `./run.bat build_start` and verify that it

 * Runs Alfresco Content Service (ACS)
 * Runs Alfresco Share
 * Runs Alfresco Search Service (ASS)
 * Runs PostgreSQL database
 * Deploys the JAR assembled modules

All the services of the project now run as docker containers. The run script offers the following tasks:

 * `build_start`. Build the whole project, recreate the ACS and Share docker images, start the dockerised environment composed by ACS, Share, ASS and PostgreSQL and tail the logs of all the containers.
 * `build_start_with_supportTools`. Build the whole project with OOTBee support tools, recreate the ACS and Share docker images, start the dockerised environment composed by ACS, Share, ASS and PostgreSQL and tail the logs of all the containers. This is a custom target, not available with OOTB SDK
 * `build_start_it_supported`. Build the whole project including dependencies required for IT execution, recreate the ACS and Share docker images, start the dockerised environment composed by ACS, Share, ASS and PostgreSQL and tail the logs of all the containers.
 * `build_start_it_supported_with_supportTools`. Build the whole project including dependencies required for IT execution along with OOTBee support tools, recreate the ACS and Share docker images, start the dockerised environment composed by ACS, Share, ASS and PostgreSQL and tail the logs of all the containers. This is a custom target, not available with OOTB SDK
 * `start`. Start the dockerised environment without building the project and tail the logs of all the containers.
 * `stop`. Stop the dockerised environment.
 * `purge`. Stop the dockerised container and delete all the persistent data (docker volumes).
 * `tail`. Tail the logs of all the containers.
 * `reload_share`. Build the Share module, recreate the Share docker image and restart the Share container.
 * `reload_share_with_supportTools`. Build the Share module with OOTBee support tools, recreate the Share docker image and restart the Share container. This is a custom target, not available with OOTB SDK
 * `reload_acs`. Build the ACS module, recreate the ACS docker image and restart the ACS container.
 * `reload_acs_with_supportTools`. Build the ACS module with OOTBee support tools, recreate the ACS docker image and restart the ACS container. This is a custom target, not available with OOTB SDK
 * `build_test`. Build the whole project, recreate the ACS and Share docker images, start the dockerised environment, execute the integration tests from the
 `integration-tests` module and stop the environment.
 * `test`. Execute the integration tests (the environment must be already started).

# How to run SDK's integration tests

Running the integration tests of a project generated from the Alfresco SDK 4.0 archetypes is pretty easy. Let's distinguish different cases of executing the
integration tests.

## Command line

If you want to run the integration tests from the command line you'll have to use the utility scripts provided by all the projects generated from the
archetypes. These are `run.sh` if you're on Unix systems or `run.bat` if you're on Windows systems.

If you want to spin up a new dockerised environment with ACS, run the integration tests and stop that environment, you'll use the `build_test` goal:

```
$ ./run.bat build_test
```

If you want all your previous data in the docker environment to be wiped out before the execution of the integration tests, remember to call the `purge` goal
before the `build_test` goal:

```
$ ./run.bat purge
$ ./run.bat build_test
```

The `build_test` goal will execute the following tasks:
* Stop any previous execution of the dockerised environment.
* Compile all the source code.
* Rebuild the custom Docker images of the project.
* Start a new dockerised environment.
* Execute the integration tests.
* Show the logs of the docker containers during the tests execution.
* Stop the dockerised environment.

If your dockerised environment is already started and you simply want to execute the integration tests against that existing ACS instance, then use the `test`
goal:

```
$ ./run.bat test
```

### Configuring a custom ACS endpoint location

If you want to run your integration tests against an ACS instance not exposed at `http://localhost:8080/alfresco`, you'll need to modify a maven property
before executing the tests.

The maven property for the test ACS instance endpoint location is `acs.endpoint.path` and you can configure it in the `pom.xml` file in the root folder of your
project:

```
    <properties>
        ...
        <test.acs.endpoint.path>http://192.168.1.11:8080/alfresco</test.acs.endpoint.path>
        ..
    </properties>
```

This parameter is **especially important** if you're running your dockerised environment using [Docker Toolbox](https://docs.docker.com/toolbox/) instead of
[Docker Desktop](https://www.docker.com/products/docker-desktop). If that is the case, the Docker container exposed ports are not mapped on the host
machine as `localhost` but under an assigned IP address (e.g. `192.168.1.11`).
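
If you are on Docker Toolbox, one way to find that assigned IP is the `docker-machine ip` command (shown here with the default machine name, which may differ in your setup):

- `docker-machine ip default   (Prints the IP address of the Docker Toolbox VM named 'default')`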

## Eclipse IDE

If your project is available in Eclipse, you can easily run one or more of the integration tests directly from your IDE.

To run the integration tests:

1. In order to properly execute the integration tests the dockerised environment must be already up and running with IT support. So, before executing the tests
you must run the `build_start_it_supported` or the `start` goal of the `run` script.

2. Open the project using the IDE.

3. Select the classes for the integration tests (either one, some, or the whole package).

4. Right click and select `Run As ...`, then click `JUnit Test`.

Once the tests have completed (typically, after a few seconds), the results are presented.

## IntelliJ IDEA IDE

If your project is available in IntelliJ IDEA, you can easily run one or more of the integration tests directly from your IDE.

To run the integration tests:

1. In order to properly execute the integration tests the dockerised environment must be already up and running with IT support. So, before executing the tests
you must run the `build_start_it_supported` or the `start` goal of the `run` script.

2. Open the project using the IDE.

3. Select the classes for the integration tests (either one, some, or the whole package).

4. Right click and select `Run Tests`.

Once the tests have completed (typically, after a few seconds), the results are presented.
When using an IDE, the source code related to the integration tests is the one deployed directly on the platform side.

## Debugging in Eclipse IDE

See: Remote debugging using Eclipse

## Debugging in IntelliJ IDEA IDE

See: Remote debugging using IntelliJ

# Some common docker commands:

- `docker -v (prints the version of docker)`
- `docker-compose -v (prints the version of docker compose)`
- `docker container ls (Lists all running containers)`
- `docker container stop <CONTAINER_ID> (To stop a container, where container_id is the id you would get when you execute above command)`
- `docker run (Runs a command in a new container)`
- `docker start (Starts one or more stopped containers)`
- `docker stop (Stops one or more running containers)`
- `docker build (Builds an image from a Dockerfile)`
- `docker pull (Pulls an image or a repository from a registry)`
- `docker push (Pushes an image or a repository to a registry)`
- `docker export (Exports a container’s filesystem as a tar archive)`
- `docker exec (Runs a command in a running container)`
- `docker search (Searches the Docker Hub for images)`
- `docker attach (Attaches to a running container)`
- `docker commit (Creates a new image from a container’s changes)`
- `docker restart (Restart one or more containers, e.g. docker restart <CONTAINER_NAME> or docker restart <CONTAINER_ID>)`


# Some common errors you may see while using docker:

#### Sometimes when you quit the docker service or stop it manually and start it again, it may not log in to the Docker Hub repository properly and you may see this error: unauthorized: incorrect username or password

In my case I got this error: Pulling acs6-aio-demo-project-postgres (postgres:9.6)...
ERROR: Get https://registry-1.docker.io/v2/library/postgres/manifests/9.6: unauthorized: incorrect username or password

##### Remedy for the above error:
Try logging out and back in to Docker using PowerShell. Refer to this issue log for details: https://github.com/docker/hub-feedback/issues/1098

Login with your Docker ID. If you don't have a Docker ID, head over to https://hub.docker.com to create one.

- `docker logout`
- `docker login (Remember to use userId instead of your emailId)`

PS C:\EclipseWorkspace\acs6-aio-demo-project> docker logout
Not logged in to https://index.docker.io/v1/
PS C:\EclipseWorkspace\acs6-aio-demo-project> docker login
Authenticating with existing credentials...
Stored credentials invalid or expired
Username (xyz@gmail.com): xyz
Password:
Login Succeeded


#### Sometimes you may see an error like: driver failed programming external connectivity on endpoint

In my case I got this error:

Creating docker_acs6-aio-demo-project-ass_1      ... error
Creating docker_acs6-aio-demo-project-share_1    ... error

ERROR: for acs6-aio-demo-project-postgres  Cannot start service acs6-aio-demo-project-postgres: driver failed programming external connectivity on endpoint docker_acs6-aio-demo-project-postgres_1 (3850d6992b6f9f6abb99b7494f0a6050bcf147f662adb78db6d25f77ee1cbdd4): Error starting userland proxy: mkdir /port/tcp:0.0.0.0:5432:tcp:172.20.0.2:5432: input/output error

ERROR: for acs6-aio-demo-project-ass  Cannot start service acs6-aio-demo-project-ass: driver failed programming external connectivity on endpoint docker_acs6-aio-demo-project-ass_1 (606d3bede53e3fb8fadf431fdc983db01fba680377fed00ef77510b6513fc5f2): Error starting userland proxy: mkdir /port/tcp:0.0.0.0:8983:tcp:172.20.0.3:8983: input/output error

ERROR: for acs6-aio-demo-project-share  Cannot start service acs6-aio-demo-project-share: driver failed programming external connectivity on endpoint docker_acs6-aio-demo-project-share_1 (1f4a7e30e906106ff606b4bc9432bb9184c61b411dbdbe84f4c0b55b940fa1d4): Error starting userland proxy: mkdir /port/tcp:0.0.0.0:9898:tcp:172.20.0.4:8888: input/output error

ERROR: Encountered errors while bringing up the project.

##### Remedy for the above error:
- `Restart Docker Desktop and try the operation again`

If it doesn't work, check whether something else is already listening on the ports shown in the error above.

Run the following command: netstat -ano | findstr 8983
This will return the list of processes that may be using port 8983.

You will see a result like the following:
TCP    0.0.0.0:8983           0.0.0.0:0              LISTENING       9500
TCP    [::]:8983              [::]:0                 LISTENING       9500

Here 9500 is the process ID; use the command below to kill this process:

taskkill /f /pid 9500

After killing the process, restart the docker for desktop again.

#### Docker images related commands:

- `docker images (Lists the images available locally)`

##### Docker provides a single command that will clean up any resources — images, containers, volumes, and networks — that are dangling (not associated with a container):

- `docker system prune`

##### To additionally remove any stopped containers and all unused images (not just dangling images), add the -a flag to the command:

- `docker system prune -a`

##### Remove one or more specific images:

Use the docker images command with the -a flag to locate the ID of the images you want to remove. This will show you every image, including intermediate image layers. When you've located the images you want to delete, you can pass their ID or tag to docker rmi:

List:

- `docker images -a`

Remove:

- `docker rmi Image Image`


##### Remove dangling images:

Docker images consist of multiple layers. Dangling images are layers that have no relationship to any tagged images. They no longer serve a purpose and consume disk space. They can be located by adding the filter flag -f with a value of dangling=true to the docker images command. When you're sure you want to delete them, you can use the docker image prune command:

Note: If you build an image without tagging it, the image will appear in the list of dangling images because it has no association with a tagged image. You can avoid this situation by providing a tag when you build, and you can retroactively tag an image with the docker tag command.

List:

- `docker images -f dangling=true`

Remove:

- `docker image prune`


##### Removing images according to a pattern
You can find all the images that match a pattern using a combination of docker images and grep. Once you're satisfied, you can delete them by using awk to pass the IDs to docker rmi. Note that these utilities are not supplied by Docker and are not necessarily available on all systems:

List:

- `docker images -a |  grep "pattern"`

Remove:

- `docker images -a | grep "pattern" | awk '{print $3}' | xargs docker rmi`

##### Remove all images

All the Docker images on a system can be listed by adding -a to the docker images command. Once you're sure you want to delete them all, you can add the -q flag to pass the image IDs to docker rmi:

List:

- `docker images -a`

Remove:

- `docker rmi $(docker images -a -q)`

##### Removing Containers

##### Remove one or more specific containers

Use the docker ps command with the -a flag to locate the name or ID of the containers you want to remove:

List:

- `docker ps -a`

Remove:

- `docker rm ID_or_Name ID_or_Name`

##### Remove a container upon exit

If you know when you’re creating a container that you won’t want to keep it around once you’re done, you can run docker run --rm to automatically delete it when it exits.

Run and Remove:

- `docker run --rm image_name`

##### Remove all exited containers

You can locate containers using docker ps -a and filter them by their status: created, restarting, running, paused, or exited. To review the list of exited containers, use the -f flag to filter based on status. When you've verified you want to remove those containers, use -q to pass the IDs to the docker rm command.

List:

- `docker ps -a -f status=exited`

Remove:

- `docker rm $(docker ps -a -f status=exited -q)`

##### Remove containers using more than one filter

Docker filters can be combined by repeating the filter flag with an additional value. This results in a list of containers that meet either condition. For example, if you want to delete all containers marked as either Created (a state which can result when you run a container with an invalid command) or Exited, you can use two filters:

List:

- `docker ps -a -f status=exited -f status=created`

Remove:

- `docker rm $(docker ps -a -f status=exited -f status=created -q)`

##### Remove containers according to a pattern

You can find all the containers that match a pattern using a combination of docker ps and grep. When you're satisfied that you have the list you want to delete, you can use awk and xargs to supply the IDs to docker rm. Note that these utilities are not supplied by Docker and are not necessarily available on all systems:

List:

- `docker ps -a | grep "pattern"`

Remove:

- `docker ps -a | grep "pattern" | awk '{print $1}' | xargs docker rm`

##### Stop and remove all containers
You can review the containers on your system with docker ps. Adding the -a flag will show all containers. When you're sure you want to delete them, you can add the -q flag to supply the IDs to the docker stop and docker rm commands:

List:

- `docker ps -a`

Remove:

- `docker stop $(docker ps -a -q)`
- `docker rm $(docker ps -a -q)`

##### Removing Volumes

##### Remove one or more specific volumes - Docker 1.9 and later

Use the docker volume ls command to locate the volume name or names you wish to delete. Then you can remove one or more volumes with the docker volume rm command:

List:

- `docker volume ls`

Remove:

- `docker volume rm volume_name volume_name`

##### Remove dangling volumes - Docker 1.9 and later
Since the point of volumes is to exist independent from containers, when a container is removed, a volume is not automatically removed at the same time. When a volume exists and is no longer connected to any containers, it's called a dangling volume. To locate them to confirm you want to remove them, you can use the docker volume ls command with a filter to limit the results to dangling volumes. When you're satisfied with the list, you can remove them all with docker volume prune:

List:

- `docker volume ls -f dangling=true`

Remove:

- `docker volume prune`

##### Remove a container and its volume
If you created an unnamed volume, it can be deleted at the same time as the container with the -v flag. Note that this only works with unnamed volumes. When the container is successfully removed, its ID is displayed. Note that no reference is made to the removal of the volume. If it is unnamed, it is silently removed from the system. If it is named, it silently stays present.

Remove:

- `docker rm -v container_name`

#### To view all docker volume data (Docker Desktop with the WSL 2 backend), browse to this path: 

\\wsl$\docker-desktop-data\version-pack-data\community\docker\volumes

#### You can also inspect a container to see which volumes it uses (check the "Mounts" section of the output): 

docker inspect <container_id>
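
If you only want the mount/volume details from that output, a Go-template filter narrows it down (a sketch; `<container_id>` is a placeholder):

- `docker inspect -f '{{ json .Mounts }}' <container_id>   (Prints only the Mounts section, i.e. the container's volumes and bind mounts)`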

#### Docker volumes on windows can also be found here: 

'C:\Users\Username\AppData\Local\Docker\wsl', where username is your username that you use to login to your computer. For example: C:\Users\abhinav\AppData\Local\Docker\wsl

##### Get container name or short container id:

- `docker ps`

##### Get full container id:

- `docker inspect -f '{{.Id}}' CONTAINER_ID`

##### Copy file:

- `docker cp localFile FULLCONTAINER_ID:pathOnContainer`

Example:
- `docker cp C:\custom-log4j.properties ac23e51fd0b5c4cc11a12133ceea16d603e7f105d8d39873c75d7cfdd5942e40:/usr/local/tomcat/shared/classes/alfresco/extension`

##### Open a shell ("SSH") into a docker container:

- `docker exec -t -i <containerName/Id> /bin/bash`

docker exec Options:
  -d, --detach               Detached mode: run command in the background
      --detach-keys string   Override the key sequence for detaching a container

  -e, --env list             Set environment variables

  -i, --interactive          Keep STDIN open even if not attached
      --privileged           Give extended privileges to the command

  -t, --tty                  Allocate a pseudo-TTY

  -u, --user string          Username or UID (format:
                             <name|uid>[:<group|gid>])

  -w, --workdir string       Working directory inside the container


Example (SSH into alfresco container):

- `docker exec -t -i docker_acs6-aio-demo-project-acs_1 /bin/bash`

Example (SSH into search service container):

- `docker exec -t -i docker_acs6-aio-demo-project-ass_1 /bin/bash`

Example (SSH into share container):

- `docker exec -t -i docker_acs6-aio-demo-project-share_1 /bin/bash`

##### Get list of docker machines:

- `docker-machine ls`

##### Create new docker machines:

- `docker-machine create <aName>`

#### Find MobyLinux docker VM settings (windows 10)

- `VM settings file can be found at: C:\Users\<userID>\AppData\Roaming\Docker`

- `Here userID is the user which you have used to login to windows. e.g.: C:\Users\abhinavmishra\AppData\Roaming\Docker\settings.json`

- `Content of settings.json will be something like this:`

  {
    "settingsVersion":  1,
    "autoStart":  false,
    "checkForUpdates":  false,
    "analyticsEnabled":  false,
    "displayedWelcomeWhale":  true,
    "displayed14393Deprecation":  false,
    "displayRestartDialog":  true,
    "displaySwitchWinLinContainers":  true,
    "latestBannerKey":  "",
    "debug":  false,
    "memoryMiB":  2048,
    "swapMiB":  1024,
    "cpus":  2,
    "diskPath":  null,
    "diskSizeMiB":  64000000000,
    "networkCIDR":  "10.1.85.0/24",
    "proxyHttpMode":  false,
    "overrideProxyHttp":  "",
    "overrideProxyHttps":  "",
    "overrideProxyExclude":  "",
    "useDnsForwarder":  true,
    "dns":  "8.8.8.8",
    "kubernetesEnabled":  false,
    "showKubernetesSystemContainers":  false,
    "kubernetesInitialInstallPerformed":  false,
    "cliConfigCreationDate":  null,
    "linuxDaemonConfigCreationDate":  null,
    "windowsDaemonConfigCreationDate":  null,
    "versionPack":  "default",
    "sharedDrives":  {

                     },
    "executableDate":  "",
    "useWindowsContainers":  false,
    "swarmFederationExplicitlyLoggedOut":  false,
    "activeOrganizationName":  null,
    "exposeDockerAPIOnTCP2375":  false
}

#### Recreate MobyLinux VM provided by docker for desktop (windows 10)

Follow the steps below:

1- `Exit the application and Quit docker desktop (you can do it by clicking quitting from windows system tray)`

2- `Go to start menu > run > services.msc`

3- `Find service 'com.docker.service' (Docker Desktop Service) and stop it`

4- `Launch the windows power shell, "Run as Administrator"`

5- `Set the location as: "C:\Program Files\Docker\Docker\resources". This is where all docker-related resources and the PowerShell scripts are placed`

    e.g. PS C:\WINDOWS\system32> Set-Location "C:\Program Files\Docker\Docker\resources"

Note: Do not use "cd" to navigate to C:\Program Files\Docker\Docker\resources. Follow the exact step as given above.

6- `Run following command: .\MobyLinux.ps1 -Destroy`

    - `If you get an error saying: 'MobyLinux.ps1 cannot be loaded because the execution of scripts is disabled on this system' then try running the following command:
 
      `PS C:\Program Files\Docker\Docker\resources> Set-ExecutionPolicy RemoteSigned`
 
    - `See here for more details: https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.security/set-executionpolicy?view=powershell-6`
    - `Try running '.\MobyLinux.ps1 -Destroy' command again`
 
e.g.:
PS C:\Program Files\Docker\Docker\resources> .\MobyLinux.ps1 -Destroy

Script started at 16:56:04.455

Modules loaded at 16:56:05.780

VM MobyLinuxVM is stopped

Destroying Switch DockerNAT...

Removing VM MobyLinuxVM...

Delete VHD C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks\MobyLinuxVM.vhdx
 
7- `Check that the VHDX file (C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks\MobyLinuxVM.vhdx) is removed`

8- `Once VHDX file is deleted, then re-create it by running following command: '.\MobyLinux.ps1 -Create'`

e.g.:
PS C:\Program Files\Docker\Docker\resources> .\MobyLinux.ps1 -Create

Script started at 16:58:58.218

Modules loaded at 16:58:58.246

Creating Switch: DockerNAT...

Switch created.

Set IP address on switch

Creating VM MobyLinuxVM...

Setting CPUs to 2 and Memory to 2048 MB

Creating dynamic VHD: C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks\MobyLinuxVM.vhdx

Attach VHD C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks\MobyLinuxVM.vhdx

Connect Internal Switch DockerNAT

Attach DVD .\docker-for-win.iso

Disabled Guest Service Interface

Enabled Heartbeat

Disabled Key-Value Pair Exchange

Enabled Shutdown

Enabled Time Synchronization

Disabled VSS

VM created.


9- `New VM will be created again`


#### Change the VM path from 'C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks\' to a different location

Follow the steps below:

1- `Open notepad, copy-paste the PowerShell code below into it and name it: CustomMobyLinux.ps1`


$CustomLinuxImageStoragePath = "D:\DockerStorage\VMs"

##### Make sure that com.docker.service is Stopped and 'Docker For Windows.exe' and 'dockerd.exe' are not running
try {
    $DockerService = Get-Service com.docker.service -ErrorAction Stop

    if ($DockerService.Status -ne "Stopped") {
        $DockerService | Stop-Service -Force
    }
}
catch {
    Write-Error $_
    $global:FunctionResult = "1"
    return
}

try {
    $DockerForWindowsProcess = Get-Process "Docker For Windows" -ErrorAction SilentlyContinue
    if ($DockerForWindowsProcess) {
        $DockerForWindowsProcess | Stop-Process -Force
    }

    $DockerDProcess = Get-Process "dockerd" -ErrorAction SilentlyContinue
    if ($DockerDProcess) {
        $DockerDProcess | Stop-Process -Force
    }
}
catch {
    Write-Error $_
    $global:FunctionResult = "1"
    return
}

######## Make sure the MobyLinuxVM is Off
$MobyLinuxVMInfo = Get-VM -Name MobyLinuxVM

if ($MobyLinuxVMInfo.State -ne "Off") {
    try {
        Stop-VM -VMName MobyLinuxVM -TurnOff -Confirm:$False -Force -ErrorAction Stop
    }
    catch {
        Write-Error $_
        $global:FunctionResult = "1"
        return
    }
}

$DockerSettings = Get-Content "$env:APPDATA\Docker\settings.json" | ConvertFrom-Json
$DockerSettings.MobyVhdPathOverride = "$CustomLinuxImageStoragePath\MobyLinuxVM.vhdx"
$DockerSettings | ConvertTo-Json | Out-File "$env:APPDATA\Docker\settings.json"

######## Turn On Docker Again
try {
    $DockerService = Get-Service com.docker.service -ErrorAction Stop

    if ($DockerService.Status -eq "Stopped") {
        $DockerService | Start-Service
        Write-Host "Sleeping for 30 seconds to give the com.docker.service service time to become ready..."
        Start-Sleep -Seconds 30
    }

    $MobyLinuxVMInfo = Get-VM -Name MobyLinuxVM
    if ($MobyLinuxVMInfo.State -ne "Running") {
        Write-Host "Manually starting MobyLinuxVM..."
        Start-VM -Name MobyLinuxVM
    }

    & "C:\Program Files\Docker\Docker\Docker For Windows.exe"
}
catch {
    Write-Error $_
    $global:FunctionResult = "1"
    return
}


2- `Save the file in this directory: C:\Program Files\Docker\Docker\resources`

3- `Exit the application and Quit docker desktop (you can do it by clicking quitting from windows system tray)`

4- `Go to start menu > run > services.msc`

5- `Find service 'com.docker.service' (Docker Desktop Service) and stop it`

6- `Launch the windows power shell, "Run as Administrator"`

7- `Set the location as: "C:\Program Files\Docker\Docker\resources". This is where all docker-related resources and the PowerShell scripts are placed`

    e.g. PS C:\WINDOWS\system32> Set-Location "C:\Program Files\Docker\Docker\resources"

Note: Do not use "cd" to navigate to C:\Program Files\Docker\Docker\resources. Follow the exact step as given above.

8- `Run following command: .\CustomMobyLinux.ps1`

    - `If you get an error: 'CustomMobyLinux.ps1 cannot be loaded because the execution of scripts is disabled on this system' then try running the following command:
 
      `PS C:\Program Files\Docker\Docker\resources> Set-ExecutionPolicy RemoteSigned`
 
    - `See here for more details: https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.security/set-executionpolicy?view=powershell-6`
    - `Try running the '.\CustomMobyLinux.ps1' command again`

 
# Some common docker compose commands:

To get the version and help related to docker-compose, use the commands below:

  - `docker-compose help               (Get help on a command)`
  - `docker-compose version            (Show the Docker-Compose version information)`

Before executing the commands, navigate to the folder where the docker-compose.yml file is located. Once you have located the file, execute the command in the format given below:
 
   docker-compose -f <PATH>/docker-compose.yml <INSTRUCTION>
 
   For example: docker-compose -f ./docker-compose.yml up
   

  - `docker-compose -f ./docker-compose.yml up                 (Create and start containers)`
  - `docker-compose -f ./docker-compose.yml down               (Stop and remove containers, networks, images, and volumes)`
  - `docker-compose -f ./docker-compose.yml start              (Start services)`
  - `docker-compose -f ./docker-compose.yml stop               (Stop services)`
  - `docker-compose -f ./docker-compose.yml restart            (Restart services)`
  - `docker-compose -f ./docker-compose.yml top                (Display the running processes)`
  - `docker-compose -f ./docker-compose.yml build              (Build or rebuild services)`
  - `docker-compose -f ./docker-compose.yml bundle             (Generate a Docker bundle from the Compose file)`
  - `docker-compose -f ./docker-compose.yml config             (Validate and view the Compose file)`
  - `docker-compose -f ./docker-compose.yml create             (Create services)`
  - `docker-compose -f ./docker-compose.yml events             (Receive real time events from containers)`
  - `docker-compose -f ./docker-compose.yml exec               (Execute a command in a running container)`
  - `docker-compose -f ./docker-compose.yml images             (List images)`
  - `docker-compose -f ./docker-compose.yml kill               (Kill containers)`
  - `docker-compose -f ./docker-compose.yml logs               (View output from containers)`
  - `docker-compose -f ./docker-compose.yml pause              (Pause services)`
  - `docker-compose -f ./docker-compose.yml unpause            (Unpause services)`
  - `docker-compose -f ./docker-compose.yml port               (Print the public port for a port binding)`
  - `docker-compose -f ./docker-compose.yml ps                 (List containers)`
  - `docker-compose -f ./docker-compose.yml pull               (Pull service images)`
  - `docker-compose -f ./docker-compose.yml push               (Push service images)`
  - `docker-compose -f ./docker-compose.yml rm                 (Remove stopped containers)`
  - `docker-compose -f ./docker-compose.yml run                (Run a one-off command)`
  - `docker-compose -f ./docker-compose.yml scale              (Set number of containers for a service)`

  Here -f ./docker-compose.yml is an optional parameter if you are executing the commands from the directory where the docker-compose.yml file is located. Otherwise, you must provide the exact path to the file.
  For example: docker-compose -f c:\Alfresco\6.1\docker-compose.yml up

##### You can also specify multiple docker-compose.yml files:

 - `docker-compose -f docker-compose.yml -f docker-compose.admin.yml run backup_db`

   Here backup_db is a service defined in your docker-compose.yml file.

##### You can also execute multiple instructions at same time:

  - `docker-compose down && docker-compose build --no-cache && docker-compose up`
 
   This command will bring down all the services, rebuild them and bring them back up.
 
  - `docker-compose stop acs6-aio-demo-project-acs && docker-compose build --no-cache acs6-aio-demo-project-acs && docker-compose up acs6-aio-demo-project-acs`
 
   This command stops only the acs6-aio-demo-project-acs service, rebuilds it and brings it up again (note that `docker-compose down` operates on the whole project, so a per-service restart uses `stop`/`up` instead).
 

### Removing/Uninstalling the ACS
When we are finished with the test, PoC, etc., we can easily remove the ACS containers with the following Docker Compose command (this is not necessary if you already ran docker-compose down):


- `docker-compose rm`

Going to remove docker_acs6-aio-demo-project-share_1, docker_acs6-aio-demo-project-acs_1, docker_acs6-aio-demo-project-postgres_1, docker_acs6-aio-demo-project-ass_1

Are you sure? [yN] y

Removing docker_acs6-aio-demo-project-share_1 ... done

Removing docker_acs6-aio-demo-project-acs_1 ... done

Removing docker_acs6-aio-demo-project-postgres_1    ... done

Removing docker_acs6-aio-demo-project-ass_1    ... done

##### List the running and stopped containers to make sure they are gone:

$ docker container ls -a

CONTAINER ID        IMAGE              COMMAND CREATED           STATUS PORTS        NAMES