Setup Guide

In this guide, you learn how to set up the development environment of Artemis. Artemis is based on JHipster, i.e. Spring Boot development on the application server using Java 17, and TypeScript development on the application client in the browser using Angular. To get an overview of the technologies used, have a look at the JHipster Technology stack and other tutorials on the JHipster homepage.

You can find tutorials on how to set up JHipster in an IDE (IntelliJ IDEA Ultimate is recommended) at https://jhipster.github.io/configuring-ide. Note that the Community Edition of IntelliJ IDEA does not provide Spring Boot support (see the comparison matrix). Before you can build Artemis, you must install and configure the following dependencies/tools on your machine:

  1. Java JDK: We use Java (JDK 17) to develop and run the Artemis application server, which is based on Spring Boot.

  2. MySQL Database Server 8, or PostgreSQL: Artemis uses Hibernate to store entities in an SQL database and Liquibase to automatically apply schema transformations when updating Artemis.

  3. Node.js: We use Node LTS (>=18.14.0 < 19) to compile and run the client Angular application. Depending on your system, you can install Node either from source or as a pre-packaged bundle.

  4. Npm: We use Npm (>=9.4.0) to manage client side dependencies. Npm is typically bundled with Node.js, but can also be installed separately.

  5. ( Graphviz: We use Graphviz to generate graphs within exercise task descriptions. It is not necessary for a successful build, but it is required for production setups, as errors will otherwise occur at runtime. )

  6. ( A version control and build system is necessary for the programming exercise feature of Artemis. There are multiple stacks available for the integration with Artemis, e.g. Bamboo + Bitbucket + Jira, Jenkins + GitLab, or the integrated LocalVC + LocalCI setup; see the programming exercise setup sections below. )
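
Once these tools are installed, you can verify their versions from a terminal. This is only a quick sanity check; the exact commands and package names depend on your operating system:

java -version     # should report a JDK 17 build
node -v           # should report Node 18 LTS (>=18.14.0, <19)
npm -v            # should report npm >=9.4.0
mysql --version   # or: psql --version, if you use PostgreSQL
dot -V            # Graphviz (optional for local builds, required for production)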



Database Setup

The required Artemis schema will be created / updated automatically at startup time of the server application. Artemis supports MySQL and PostgreSQL databases.

MySQL Setup

Download and install the MySQL Community Server (8.0.x).

You have to run a database on your local machine to be able to start Artemis.

We recommend starting the database in a docker container. You can run the MySQL Database Server using e.g. docker compose -f docker/mysql.yml up.

If you run your own MySQL server, make sure to specify the default character-set as utf8mb4 and the default collation as utf8mb4_unicode_ci. You can achieve this e.g. by using a my.cnf file in the location /etc.

[client]
default-character-set = utf8mb4
[mysql]
default-character-set = utf8mb4
[mysqld]
character-set-client-handshake = TRUE
init-connect='SET NAMES utf8mb4'
character-set-server = utf8mb4
collation-server = utf8mb4_unicode_ci

Make sure the configuration file is used by MySQL when you start the server. You can find more information at https://dev.mysql.com/doc/refman/8.0/en/option-files.html.

Users for MySQL

For the development environment, the default MySQL user is ‘root’ with an empty password.
(In case you want to use a different password, make sure to change the value in application-local.yml (spring > datasource > password) and in liquibase.gradle (within the ‘liquibaseCommand’ as argument password)).

Set empty root password for MySQL 8

If you have problems connecting to the MySQL 8 database using an empty root password, you can try the following command to reset the root password to an empty password:

mysql -u root --execute "ALTER USER 'root'@'localhost' IDENTIFIED WITH caching_sha2_password BY ''";

Warning

Empty root passwords should only be used in a development environment. The root password for a production environment must never be empty.

PostgreSQL Setup

No special PostgreSQL settings are required. You can either use your package manager’s version, or set it up using a container. An example Docker Compose setup based on the official container image is provided in src/main/docker/postgres.yml.
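
For example, assuming the Docker Compose file path mentioned above, you can start a local PostgreSQL instance like this (adjust the path if it differs in your checkout):

docker compose -f src/main/docker/postgres.yml up -d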

When setting up the Artemis server, the following values need to be added/updated in the server configuration (see setup steps below) to connect to PostgreSQL instead of MySQL:

spring:
    datasource:
        url: "jdbc:postgresql://<IP/HOSTNAME of PostgreSQL database host>/Artemis?ssl=false"
        username: <YOUR_DB_USER>
        password: <YOUR_DB_PASSWORD>
    jpa:
        database-platform: org.hibernate.dialect.PostgreSQL10Dialect
        database: POSTGRESQL

Note

This example assumes that the database is called Artemis. You might have to update this part of spring.datasource.url as well if you chose a different name.


Server Setup

To start the Artemis application server from the development environment, first import the project into IntelliJ and then make sure to install the Spring Boot plugins to run the main class de.tum.in.www1.artemis.ArtemisApp. Before the application runs, you have to change some configuration options. You can change the options directly in the file application-artemis.yml in the folder src/main/resources/config. However, you have to be careful that you do not accidentally commit your password. Therefore, we strongly recommend creating a new file application-local.yml in the folder src/main/resources/config, which is ignored by default. You can override the following configuration options in this file.

artemis:
    repo-clone-path: ./repos/
    legal-path: ./legal/
    repo-download-clone-path: ./repos-download/
    bcrypt-salt-rounds: 11   # The number of salt rounds for the bcrypt password hashing. Lower numbers make it faster but less secure and vice versa.
                             # Please use the bcrypt benchmark tool to determine the best number of rounds for your system. https://github.com/ls1intum/bcrypt-Benchmark
    user-management:
        use-external: true
        password-reset:
             credential-provider: <provider> # The credential provider which users can log in through (e.g. TUMonline)
             links: # The password reset links for different languages
                 en: '<link>'
                 de: '<link>'
        external:
            url: https://jira.ase.in.tum.de
            user: <username>    # e.g. ga12abc
            password: <password>
            admin-group-name: tumuser
        ldap:
            url: <url>
            user-dn: <user-dn>
            password: <password>
            base: <base>
    version-control:
        url: https://bitbucket.ase.in.tum.de
        user: <username>    # e.g. ga12abc
        password: <password>
        token: <token>                 # VCS API token giving Artemis full Admin access. Not needed for Bamboo+Bitbucket
    continuous-integration:
        url: https://bamboo.ase.in.tum.de
        user: <username>    # e.g. ga12abc
        token: <token>      # Enter a valid token generated by bamboo or leave this empty to use the fallback authentication user + password
        password: <password>
        # Some CI systems, like Jenkins, offer a specific token that gets checked against any incoming notifications
        # from a VCS trying to trigger a build plan. Only if the notification request contains the correct token, the plan
        # is triggered. This can be seen as an alternative to sending an authenticated request to a REST API and then
        # triggering the plan.
        # In the case of Artemis, this is only really needed for the Jenkins + GitLab setup, since the GitLab plugin in
        # Jenkins only allows triggering the Jenkins jobs using such a token. Furthermore, in this case, the value of the
        # hudson.util.Secret is stored in the build plan, so you also have to specify this encrypted string here and NOT the actual token value itself!
        # You can get this by GETting any job.xml for a job with an activated GitLab step and your token value of choice.
        secret-push-token: <token hash>
        # Key of the saved credentials for the VCS service
        # Bamboo: not needed
        # Jenkins: You have to specify the key from the credentials page in Jenkins under which the user and
        #          password for the VCS are stored
        vcs-credentials: <credentials key>
        # Key of the credentials for the Artemis notification token
        # Bamboo: not needed
        # Jenkins: You have to specify the key from the credentials page in Jenkins under which the notification token is stored
        notification-token: <credentials key>
        # The actual value of the notification token to check against in Artemis. This is the token that gets sent with
        # every request the CI system makes to Artemis containing a new result after a build.
        # Bamboo: The token value you use for the Server Notification Plugin
        # Jenkins: The token value you use for the Server Notification Plugin, which is stored under the notification-token credential above
        authentication-token: <token>
    git:
        name: Artemis
        email: artemis@in.tum.de
    athena:
        url: http://localhost:5000
        secret: abcdef12345

Replace all entries of the form <...> with proper values, e.g. your TUM Online account credentials to connect to the given instances of JIRA, Bitbucket and Bamboo. Alternatively, you can connect to your local JIRA, Bitbucket and Bamboo instances. It is not necessary to fill in all the fields; most of them can be left blank. Note that additional information about the setup for programming exercises is provided in the programming exercise sections below.

Note

Be careful that you do not commit changes to application-artemis.yml. To avoid this, follow the best practice when configuring your local development environment:

  1. Create a file named application-local.yml under src/main/resources/config.

  2. Copy the contents of application-artemis.yml into the new file.

  3. Update configuration values in application-local.yml.
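
The three steps above roughly correspond to the following shell commands (run from the repository root):

cp src/main/resources/config/application-artemis.yml src/main/resources/config/application-local.yml
# then edit src/main/resources/config/application-local.yml and adjust the values for your environment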

By default, changes to application-local.yml will be ignored by git so you don’t accidentally share your credentials or other local configuration options. The run configurations contain a profile local at the end to make sure the application-local.yml is considered. You can create your own configuration files application-<name>.yml and then activate the profile <name> in the run configuration if you need additional customizations.

If you use a password, you need to adapt it in gradle/liquibase.gradle.

Run the server via a service configuration

This setup is recommended for production instances as it registers Artemis as a service and e.g. enables auto-restarting of Artemis after the VM running Artemis has been restarted. Alternatively, you could look at the section below about deploying Artemis as a Docker container. For development setups, see the other guides below.

This is a service file that works on Debian/Ubuntu (using systemd):

[Unit]
Description=Artemis
After=syslog.target
[Service]
User=artemis
WorkingDirectory=/opt/artemis
ExecStart=/usr/bin/java \
  -Djdk.tls.ephemeralDHKeySize=2048 \
  -DLC_CTYPE=UTF-8 \
  -Dfile.encoding=UTF-8 \
  -Dsun.jnu.encoding=UTF-8 \
  -Djava.security.egd=file:/dev/./urandom \
  -Xmx2048m \
  --add-modules java.se \
  --add-exports java.base/jdk.internal.ref=ALL-UNNAMED \
  --add-exports java.naming/com.sun.jndi.ldap=ALL-UNNAMED \
  --add-opens java.base/java.lang=ALL-UNNAMED \
  --add-opens java.base/java.nio=ALL-UNNAMED \
  --add-opens java.base/sun.nio.ch=ALL-UNNAMED \
  --add-opens java.management/sun.management=ALL-UNNAMED \
  --add-opens jdk.management/com.sun.management.internal=ALL-UNNAMED \
  -jar artemis.war \
  --spring.profiles.active=prod,bamboo,bitbucket,jira,ldap,scheduling,openapi
SuccessExitStatus=143
StandardOutput=append:/opt/artemis/artemis.log
StandardError=inherit
[Install]
WantedBy=multi-user.target

The following parts might also be useful for other (production) setups, even if this service file is not used:

  • -Djava.security.egd=file:/dev/./urandom: This is required if repositories are cloned via SSH from the VCS.

    The default (pseudo-)random-generator /dev/random is blocking, which results in very bad performance when using SSH due to lack of entropy.

The file should be placed at /etc/systemd/system/artemis.service and after running sudo systemctl daemon-reload, you can start the service using sudo systemctl start artemis.

You can stop the service using sudo service artemis stop and restart it using sudo service artemis restart.

Logs can be fetched using sudo journalctl -u artemis -f -n 200.
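
Summarized as shell commands (assuming you saved the unit file shown above as artemis.service in the current directory):

sudo cp artemis.service /etc/systemd/system/artemis.service
sudo systemctl daemon-reload
sudo systemctl start artemis         # stop / restart work analogously
sudo journalctl -u artemis -f -n 200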

Run the server via Docker

Artemis provides a Docker image named ghcr.io/ls1intum/artemis:<TAG/VERSION>.
The current develop branch is provided by the tag develop.
The latest release is provided by the tag latest.
Specific releases like 5.7.1 can be retrieved as ghcr.io/ls1intum/artemis:5.7.1.
Branches tied to a pull request can be obtained by using the tag PR-<PR NUMBER>.
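
For example (pick the tag that matches the version you want to run):

docker pull ghcr.io/ls1intum/artemis:latest      # latest release
docker pull ghcr.io/ls1intum/artemis:develop     # current develop branch
docker pull ghcr.io/ls1intum/artemis:5.7.1       # a specific release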

Dockerfile

You can find the latest Artemis Dockerfile at docker/artemis/Dockerfile.

  • The Dockerfile has multiple stages: a builder stage that builds the .war file, an optional external_builder stage to import a pre-built .war file, a war_file stage to choose between the builder stages via a build argument, and a runtime stage with minimal dependencies just for running Artemis.

  • The Dockerfile defines three Docker volumes (at the specified paths inside the container):

    • /opt/artemis/config:

      This can be used to store additional configurations of Artemis in YAML files. Its usage is optional, and we recommend using environment files to override your custom configurations instead of mounting src/main/resources/application-local.yml as such an additional configuration file. The other configuration files like src/main/resources/application.yml, … are built into the .war file and are therefore not needed in this directory.

      Tip

      Instead of mounting this config directory, you can also use environment variables for the configuration as defined by the Spring relaxed binding. You can either place those environment variables directly in the environment section, or create an .env file. When starting an Artemis container directly with the Docker CLI, the .env file can also be given via the --env-file option (see the example after this list).

      To ease the transition of an existing set of YAML configuration files into the environment variable style, a helper script can be used.

    • /opt/artemis/data:

      This directory should be used for any data (e.g., local clones of repositories). This is preconfigured in the docker Java Spring profile (which sets the following values: artemis.repo-clone-path, artemis.repo-download-clone-path, artemis.course-archives-path, artemis.submission-export-path, artemis.legal-path, and artemis.file-upload-path).

    • /opt/artemis/public/content:

      This directory will be used for branding. You can specify a favicon here.

  • The Dockerfile assumes that the mounted volumes are located on a file system with the following locale settings (see #4439 for more details):

    • LC_ALL en_US.UTF-8

    • LANG en_US.UTF-8

    • LANGUAGE en_US.UTF-8
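
As referenced in the tip above, configuration values can also be supplied as environment variables following the Spring relaxed binding. A minimal sketch; the keys and values below are only examples and have to match your setup:

# relaxed binding: dots become underscores, dashes are dropped, names are upper-cased
cat > artemis.env <<'EOF'
SPRING_DATASOURCE_URL=jdbc:mysql://mysql:3306/Artemis
ARTEMIS_USERMANAGEMENT_USEEXTERNAL=false
EOF

# pass the file when starting the container directly with the Docker CLI
docker run --env-file artemis.env ghcr.io/ls1intum/artemis:latest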

Warning

ARM64 image builds might run out of memory if Docker is not provided with enough memory and/or swap space. On an Apple M1, we had to set the Docker Desktop memory limit to 12 GB or more.

Debugging with Docker

The Docker containers allow enabling Java Remote Debugging via Java environment variables.
Java Remote Debugging lets you use your preferred debugger connected to port 5005. For IntelliJ, you can use the Remote Java Debugging for Docker profile shipped in the git repository.

With the following Java environment variable, you can configure the Remote Java Debugging inside a container:

_JAVA_OPTIONS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005"

This is already pre-set in the Docker Compose Artemis-Dev-MySQL Setup.
For issues at startup, you might have to suspend the java command until a debugger is connected. This is possible by setting suspend=y.
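
When starting a container manually with the Docker CLI instead of the provided Compose setup, the same variable can be set like this (a sketch; expose port 5005 so the debugger can connect):

docker run -p 5005:5005 \
  -e _JAVA_OPTIONS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005" \
  ghcr.io/ls1intum/artemis:latest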

Run the server via a run configuration in IntelliJ

The project comes with some pre-configured run / debug configurations that are stored in the .idea directory. When you import the project into IntelliJ the run configurations will also be imported.

The recommended way is to run the server and the client separately. This provides fast rebuilds of the server and hot module replacement in the client.

  • Artemis (Server): The server will be started separately from the client. The startup time decreases significantly.

  • Artemis (Client): Will execute npm install and npm run serve. The client will be available at http://localhost:9000/ with hot module replacement enabled (also see Client Setup).

Other run / debug configurations

  • Artemis (Server & Client): Will start the server and the client. The client will be available at http://localhost:8080/ with hot module replacement disabled.

  • Artemis (Server, Jenkins & GitLab): The server will be started separately from the client with the profiles dev,jenkins,gitlab,artemis instead of dev,bamboo,bitbucket,jira,artemis.

  • Artemis (Server, LocalVC & LocalCI): The server will be started separately from the client with the profiles dev,localci,localvc,artemis instead of dev,bamboo,bitbucket,jira,artemis. To use this configuration, Docker needs to be running on your system as the local CI system uses it to run build jobs.

  • Artemis (Server, LocalVC & LocalCI, Athena): The server will be started separately from the client with the athena profile and local VC/CI enabled (see Athena Service).

Run the server with Spring Boot and Spring profiles

The Artemis server should start up by running the main class de.tum.in.www1.artemis.ArtemisApp using Spring Boot.

Note

Artemis uses Spring profiles to segregate parts of the application configuration and make it only available in certain environments. For development purposes, the following program arguments can be used to enable the dev profile and the profiles for JIRA, Bitbucket and Bamboo:

--spring.profiles.active=dev,bamboo,bitbucket,jira,artemis,scheduling

If you use IntelliJ (Community or Ultimate) you can set the active profiles by

  • Choosing Run | Edit Configurations...

  • Going to the Configuration Tab

  • Expanding the Environment section to reveal VM Options and setting them to -Dspring.profiles.active=dev,bamboo,bitbucket,jira,artemis,scheduling

Set Spring profiles with IntelliJ Ultimate

If you use IntelliJ Ultimate, add the following entry to the section Active Profiles (within Spring Boot) in the server run configuration:

dev,bamboo,bitbucket,jira,artemis,scheduling

Run the server with the command line (Gradle wrapper)

If you want to run the application via the command line instead, make sure to pass the active profiles to the gradlew command like this:

./gradlew bootRun --args='--spring.profiles.active=dev,bamboo,bitbucket,jira,artemis,scheduling'

As an alternative, you might want to use Jenkins and GitLab with internal user management in Artemis; in that case, use the profiles:

dev,jenkins,gitlab,artemis,scheduling
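
For example, to start the server from the command line with these profiles:

./gradlew bootRun --args='--spring.profiles.active=dev,jenkins,gitlab,artemis,scheduling'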

Configure Text Assessment Analytics Service

Text Assessment Analytics is an internal analytics service used to gather data regarding the features of the text assessment process. The following assessment events are tracked:

  1. Adding new feedback on a manually selected block

  2. Adding new feedback on an automatically selected block

  3. Deleting a feedback

  4. Editing/Discarding an automatically generated feedback

  5. Clicking the Submit button when assessing a text submission

  6. Clicking the Assess Next button when assessing a text submission

These events are tracked by attaching a POST call to the respective DOM elements on the client side. The POST call accesses the TextAssessmentEventResource, which then adds the events to its respective table. This feature is disabled by default. We can enable it by modifying the configuration in the file src/main/resources/config/application-artemis.yml like so:

info:
   text-assessment-analytics-enabled: true

Client Setup

You need to install Node and Npm on your local machine.

Using IntelliJ

If you are using IntelliJ, you can use the pre-configured Artemis (Client) run configuration that is delivered with this repository:

  • Choose Run | Edit Configurations...

  • Select the Artemis (Client) configuration from the npm section

  • Now you can run the configuration in the upper right corner of IntelliJ

Using the command line

You should be able to run the following command to install development tools and dependencies. You will only need to run this command when dependencies change in package.json.

npm install

To start the client application in the browser, use the following command:

npm run serve

This compiles TypeScript code to JavaScript code, starts the live reloading feature (i.e. whenever you change a TypeScript file and save, the client is automatically reloaded with the new code) and serves the client application in your browser at http://localhost:9000. If you have activated the JIRA profile (see Server Setup above) and have configured application-artemis.yml correctly, you should be able to log in with your TUM Online account.

Hint

In case you encounter any problems regarding JavaScript heap memory leaks when executing npm run serve or any other scripts from package.json, you can adjust a memory limit parameter (node-options=--max-old-space-size=6144) which is set by default in the project-wide .npmrc file.

If you still face the issue, you can try to set a lower/higher value than 6144 MB. Recommended values are 3072 (3GB), 4096 (4GB), 5120 (5GB), 6144 (6GB), 7168 (7GB), and 8192 (8GB).

You can override the project-wide .npmrc file by using a per-user config file (~/.npmrc).
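
For example, to override the memory limit only for your user (pick a value that fits your machine):

echo "node-options=--max-old-space-size=4096" >> ~/.npmrc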

Make sure not to commit changes to the project-wide .npmrc unless the GitHub build also needs these settings.

For more information, review Working with Angular. For further instructions on how to develop with JHipster, have a look at Using JHipster in development.


Customize your Artemis instance

You can define the following custom assets for Artemis to be used instead of the TUM defaults:

  • The logo next to the “Artemis” heading on the navbar → ${artemisRunDirectory}/public/images/logo.png

  • The favicon → ${artemisRunDirectory}/logo/favicon.svg

  • The contact email address in the application-{dev,prod}.yml configuration file under the key info.contact
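
For example, assuming ${artemisRunDirectory} points to the directory Artemis runs from, the custom assets can be placed like this (the source file names are placeholders):

cp my-logo.png    ${artemisRunDirectory}/public/images/logo.png
cp my-favicon.svg ${artemisRunDirectory}/logo/favicon.svg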


Privacy Statement and Imprint

The privacy statement and the imprint are stored in the ./legal directory by default. You can override this location by setting the artemis.legal-path value in application-artemis.yml. The privacy statement and the imprint are stored as Markdown files. Currently, English and German are supported. The documents have to follow the naming convention <privacy_statement|imprint>_<de|en>.md. In case you add a file for only one language, this file will always be shown regardless of the user's language setting. If you add a file for each language, the file will be shown depending on the user's language setting.
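
Following the naming convention above, the legal directory would look like this (default location ./legal):

ls ./legal
# imprint_de.md  imprint_en.md  privacy_statement_de.md  privacy_statement_en.md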

In the following, the documentation provides a template in English and German for the privacy statement and the imprint with placeholders that have to be replaced with the actual content.

Warning

These are only templates that are used similarly at TUM and need to be adapted to your needs. Make sure to consult your data protection officer and/or legal department before making the privacy statement/imprint publicly available. We do not take any responsibility for the content of the privacy statement or the imprint.


Privacy Statement German template

 # Datenschutz

 Die <Universität> nimmt den Schutz von personenbezogenen Daten sehr ernst und nutzt eine sichere und verschlüsselte Kommunikation nach
 bewährten Verfahren und modernsten Technologien (z.B. HTTPS mit sicherem Zertifikat der TUM, TLS 1.3, Strict Transport Security, Forward Secrecy, Same Site Cookie Schutz) um die
 Privatsphäre der Nutzer von Artemis bestmöglich zu schützen. Artemis verarbeitet personenbezogene Daten im Rahmen der Lehre und im Rahmen von Prüfungen unter Beachtung der
 geltenden datenschutzrechtlichen Bestimmungen. Die Rechtsgrundlage für die Verarbeitung der Daten stellt Art. 6 Abs. 1 Lit. c (Rechtliche Verpflichtung)
 der [Datenschutz Grundverordnung (DSGVO)](http://data.europa.eu/eli/reg/2016/679/oj) dar. Darüber hinaus gelten <weitere anwendbare landesspezifische Regelungen>.

 Nachfolgend informieren wir über Art, Umfang und Zweck der Erhebung und Verwendung personenbezogener Daten. Diese Informationen können jederzeit von unserer Webseite abgerufen
 werden.

 ## Allgemeine Informationen

 #### Name und Kontaktdaten des Verantwortlichen

 <Universität>
 Postanschrift: <Adresse>
 Telefon: <Telefonnummer>
 E-Mail: <Email>

 #### Kontaktdaten des/der Datenschutzbeauftragten

 Der/Die Datenschutzbeauftragte der <Universität>
 Postanschrift: <Adresse>
 Telefon: <Telefonnummer>
 E-Mail: <Email>

 #### Zwecke und Rechtsgrundlagen für die Verarbeitung personenbezogener Daten

 Zweck der Verarbeitung ist die Erfüllung der uns vom Gesetzgeber zugewiesenen öffentlichen Aufgaben, insbesondere der Lehre und der Prüfung im universitären Umfeld. Die Rechtsgrundlage für die Verarbeitung Ihrer Daten ergibt sich, soweit nichts anderes angegeben ist, aus Art. 6 Abs. 1 Lit. c (Rechtliche Verpflichtung) der [Datenschutz Grundverordnung (DSGVO)](http://data.europa.eu/eli/reg/2016/679/oj). Darüber hinaus gelten <weitere anwendbare landesspezifische Regelungen>. Demnach ist es uns erlaubt, die zur Erfüllung einer uns obliegenden Aufgabe erforderlichen Daten zu verarbeiten.

 #### Empfänger von personenbezogenen Daten

 Der technische Betrieb unserer Datenverarbeitungssysteme erfolgt durch:
 <zuständige Stelle für den Betrieb>

 <Adresse>
 Telefon: <Telefonnummer>
 E-Mail: <Email>
 <Webseite>

 Gegebenenfalls werden Ihre Daten an die zuständigen Aufsichts- und Rechnungsprüfungsbehörden zur Wahrnehmung der jeweiligen Kontrollrechte übermittelt.

 <Falls anwendbar, geben Sie hier an, an wen die Daten potentiell weitergeleitet werden um die Informationssicherheit sicherzustellen>

 #### Dauer der Speicherung der personenbezogenen Daten

 Ihre Daten werden nur so lange gespeichert, wie dies unter Beachtung gesetzlicher Aufbewahrungsfristen zur Aufgabenerfüllung erforderlich ist.

 #### Ihre Rechte

Soweit wir von Ihnen personenbezogene Daten verarbeiten, stehen Ihnen als Betroffener nachfolgende Rechte zu:

 * Sie haben das Recht auf Auskunft über die zu Ihrer Person gespeicherten Daten (Art. 15 DSGVO).
 * Sollten unrichtige personenbezogene Daten verarbeitet werden, steht Ihnen ein Recht auf Berichtigung zu (Art. 16 DSGVO).
 * Liegen die gesetzlichen Voraussetzungen vor, so können Sie die Löschung oder Einschränkung der Verarbeitung verlangen (Art. 17 und 18 DSGVO).
 * Wenn Sie in die Verarbeitung eingewilligt haben oder ein Vertrag zur Datenverarbeitung besteht und die Datenverarbeitung mithilfe automatisierter Verfahren durchgeführt wird, steht Ihnen gegebenenfalls ein Recht auf Datenübertragbarkeit zu (Art. 20 DSGVO).
 * Falls Sie in die Verarbeitung eingewilligt haben und die Verarbeitung auf dieser Einwilligung beruht, können Sie die Einwilligung jederzeit für die Zukunft widerrufen. Die Rechtmäßigkeit der aufgrund der Einwilligung bis zum Widerruf erfolgten Datenverarbeitung wird durch diesen nicht berührt.

 Sie haben das Recht, aus Gründen, die sich aus Ihrer besonderen Situation ergeben, jederzeit gegen die Verarbeitung Ihrer Daten Widerspruch einzulegen, wenn die Verarbeitung ausschließlich auf Grundlage des Art. 6 Abs. 1 Buchst. e oder f DSGVO erfolgt (Art. 21 Abs. 1 Satz 1 DSGVO).

 #### Beschwerderecht bei der Aufsichtsbehörde

 Weiterhin besteht ein Beschwerderecht beim <Verantwortliche Person>. Diesen können Sie unter folgenden Kontaktdaten erreichen:
 Postanschrift: <Postanschrift>
 Adresse: <Adresse>
 Telefon: <Telefonnummer>
 E-Mail: <Email>
 <Webseite>

 #### Weitere Informationen

 Für nähere Informationen zur Verarbeitung Ihrer Daten und zu Ihren Rechten können Sie uns unter den oben (zu Beginn von A.) genannten Kontaktdaten erreichen.

 ## Informationen zum Internetauftritt

 #### Technische Umsetzung

 Die Webserver von Artemis werden durch <Betreiber> betrieben. Die von Ihnen im Rahmen des Besuchs
 unseres Webauftritts übermittelten personenbezogenen Daten werden daher in unserem Auftrag durch <Betreiber> verarbeitet:

 <Betreiber>
 <Straße und Hausnummer>
 <Postleitzahl> <Ort>
 <Telefonnummer>
 E-Mail: <Email>
 <Webseite>

 #### Protokollierung

 Wenn Sie diese oder andere Internetseiten von Artemis aufrufen, übermitteln Sie über Ihren Internetbrowser Daten an unsere Webserver. Die folgenden Daten werden während einer laufenden Verbindung zur Kommunikation zwischen Ihrem Internetbrowser und unseren Webservern temporär in einer Logdatei aufgezeichnet:

 * IP-Adresse des anfragenden Rechners
 * Datum und Uhrzeit des Zugriffs
 * Name, URL und übertragene Datenmenge der abgerufenen Datei
 * Zugriffsstatus (angeforderte Datei übertragen, nicht gefunden etc.)
 * Erkennungsdaten des verwendeten Browser- und Betriebssystems (sofern vom anfragenden Webbrowser übermittelt)
 * Webseite, von der aus der Zugriff erfolgte (sofern vom anfragenden Webbrowser übermittelt)

 Die Verarbeitung der Daten in dieser Logdatei kann wie folgt geschehen:

 * Die Logeinträge können kontinuierlich und automatisch ausgewertet werden, um Angriffe auf die Webserver erkennen und entsprechend reagieren zu können.
 * In Einzelfällen, d.h. bei gemeldeten Störungen, Fehlern und Sicherheitsvorfällen, kann eine manuelle Analyse erfolgen.

 #### Cookies

 Um den Funktionsumfang unseres Internetangebotes zu erweitern und die Nutzung für Sie komfortabler zu gestalten, verwenden wir zum Teil so genannte "Cookies". Mit Hilfe dieser Cookies können bei dem Aufruf unserer Webseite Daten auf Ihrem Rechner gespeichert werden. Sie können das Speichern von Cookies jedoch deaktivieren oder Ihren Browser so einstellen, dass Cookies nur für die Dauer der jeweiligen Verbindung zum Internet gespeichert werden. Hierdurch könnte allerdings der Funktionsumfang unseres Angebotes eingeschränkt werden.

 ## Informationen zu einzelnen Verarbeitungen

 #### Anmeldung

 Bei Ihrer Anmeldung am System werden Ihre personenbezogenen Daten gegenüber dem Verzeichnisdienst der <Universität> verifiziert.

 #### Auskunft und Berichtigung

 Sie haben das Recht, auf schriftlichen Antrag und unentgeltlich Auskunft über die personenbezogenen Daten zu erhalten, die über Sie gespeichert sind. Zusätzlich haben Sie das Recht auf Berichtigung unrichtiger Daten. Den behördlichen Datenschutzbeauftragten der <Universität> erreichen Sie per E-Mail unter <Email Datenschutzbeauftragter> oder über <Link zum Datenschutzbeauftragten>.

Privacy Statement English template

# Privacy

The <University> takes the protection of your personal data very seriously and uses secure and encrypted communication according to best practices and state-of-the-art technologies (e.g. HTTPS with secure certificate of TUM, TLS 1.3, Strict Transport Security, Forward Secrecy, Same Site Cookie protection) to protect the privacy of Artemis users in the best possible way. Artemis processes personal data in the context of teaching and in the context of examinations in compliance with the applicable data protection regulations.
The legal basis for the processing of data is Art. 6(1) lit. c (Legal Obligation) of the General Data Protection Regulation (GDPR).
In addition, <additional federal or country-specific rules> apply.

In the following, we provide information on the type, scope and purpose of the collection and use of personal data. This information can be accessed at any time from our website.

## General Information

### Name and contact details of the person responsible

<University>
Postal address: <Postal address>
Telephone: <Telephone number>
Email: <Email>

### Name and contact details of the data protection officer

The data protection officer of the <University>
Postal address: <Postal address>
Telephone: <Telephone number>
E-mail: <Email>

### Purpose and legal basis for the processing of personal data

The purpose of the processing is to fulfill the public duties assigned to us by the legislator, in particular teaching and examination in the university environment. Unless otherwise stated, the legal basis for processing your data results from Art. 6(1) lit. c (Legal Obligation) of the General Data Protection Regulation (GDPR).
In addition,<additional federal or country-specific rules> apply. Accordingly, we are permitted to process the data required to fulfill a duty incumbent upon us.

### Recipients of personal data

The technical operation of our data processing systems is carried out by:

<Operator>
<Street and house number>
<Zip code> <City>
Telephone: <Telephone number>
E-mail: <Email>
<Website>

If necessary, your data will be transmitted to the responsible supervisory and auditing authorities for the exercise of the respective control rights.

<If applicable add a paragraph to which authority data may be forwarded to ensure information security and the legal basis for this>

### Duration of the storage of personal data

Your data will only be stored for as long as is necessary for the fulfillment of duties, taking into account statutory retention periods.

### Your rights

Insofar as we process personal data from you, you are entitled to the following rights as a data subject:

* You have the right of access (Art. 15 GDPR).
* If incorrect personal data is processed, you have the right to rectification (Art. 16 GDPR).
* If the legal requirements are met, you may request the deletion or restriction of processing (Art. 17 and 18 GDPR).
* If you have consented to the processing or if there is a contract for data processing and the data processing is carried out with the help of automated procedures, you may have a right to data portability (Art. 20 GDPR).
* If you have consented to the processing and the processing is based on this consent, you can revoke the consent at any time for the future. The lawfulness of the data processing carried out on the basis of the consent until the revocation is not affected by it.

You have the right to object to the processing of your data at any time on grounds relating to your particular situation, if the processing is carried out exclusively on the basis of Art. 6(1) lit. e or f GDPR (Art. 21(1)(1) GDPR).

### Right to appeal at the supervisory authority

Furthermore, you have the right to appeal at the <supervisory authority>.
You can reach them under the following contact details:

Postal address: <Postal address>
Address: <Address>
Telephone: <Telephone number>
Email: <Email>
<Website>

#### Further Information

For more detailed information on the processing of your data and your rights, you can contact us using the contact details provided above (at the beginning of A.).

## Information about the web presence

### Technical implementation

The web servers of Artemis are operated by the <Operator>. The personal data you provide when
visiting our website is therefore processed on our behalf by <Operator>:

<Operator>
<Street and house number>
<Zip code> <City>
Telephone: <Telephone number>
Email: <Email>
<Website>

#### Logging

When you access this or other Artemis web pages, you transmit data to our web servers via your Internet browser. The following data is temporarily recorded in a log file during an ongoing connection for communication between your Internet browser and our web servers:

* IP address of the requesting computer
* Date and time of access
* Name, URL and transferred data volume of the retrieved file
* Access status (requested file transferred, not found, etc.)
* Identification data of the browser and operating system used (if transmitted by the requesting web browser)
* Web page from which access was made (if transmitted by the requesting web browser)

The processing of the data in this log file can be done as follows:

* The log entries can be continuously and automatically evaluated in order to detect attacks on the web servers and react accordingly.
* In individual cases, i.e. in the event of reported malfunctions, errors and security incidents, a manual analysis may be carried out.

#### Cookies

In order to extend the range of functions of our Internet offering and to make its use more comfortable for you, we partly use so-called "cookies". With the help of these cookies, data can be stored on your computer when you call up our website. However, you can deactivate the storage of cookies or set your browser so that cookies are only stored for the duration of the respective connection to the Internet. This could, however, limit the functional scope of our offering.

## Information on individual processing operations

#### Login

When you log in to the system, your personal data will be verified with the directory service of the <University>.

#### Disclosure and rectification

You have the right, upon written request and free of charge, to obtain information about the personal data stored about you. In addition, you have the right to have incorrect data corrected. You can reach the data protection officer of the <University> by e-mail at <Email> or via <Website>.

Imprint German template

# Impressum

#### Herausgeber

<Universität>
Postanschrift: <Postanschrift>
Telefon: <Telefonnummer>
Telefax: <Faxnummer>
E-Mail: <E-Mail-Adresse>

#### Vertretungsberechtigt

Die <Universität> wird gesetzlich vertreten durch den Präsidenten <Präsident>

#### Umsatzsteueridentifikationsnummer

<Umsatzsteueridentifikationsnummer> (gemäß § 27a Umsatzsteuergesetz)

#### Verantwortlich für den Inhalt

<Vor- und Nachname>
<Straße und Hausnummer>
<PLZ> <Ort>

#### Nutzungsbedingungen

Texte, Bilder, Grafiken sowie die Gestaltung dieser Internetseiten können dem Urheberrecht unterliegen. Nicht urheberrechtlich geschützt sind nach § 5 des Urheberrechtsgesetz (UrhG)

* Gesetze, Verordnungen, amtliche Erlasse und Bekanntmachungen sowie Entscheidungen und amtlich verfasste Leitsätze zu Entscheidungen und
* andere amtliche Werke, die im amtlichen Interesse zur allgemeinen Kenntnisnahme veröffentlicht worden sind, mit der Einschränkung, dass die Bestimmungen über Änderungsverbot und Quellenangabe in § 62 Abs. 1 bis 3 und § 63 Abs. 1 und 2 UrhG entsprechend anzuwenden sind.

Als Privatperson dürfen Sie urheberrechtlich geschütztes Material zum privaten und sonstigen eigenen Gebrauch im Rahmen des § 53 UrhG verwenden. Eine Vervielfältigung oder
Verwendung urheberrechtlich geschützten Materials dieser Seiten oder Teilen davon in anderen elektronischen oder gedruckten Publikationen und deren Veröffentlichung ist nur mit
unserer Einwilligung gestattet. Diese Einwilligung erteilen auf Anfrage die für den Inhalt Verantwortlichen. Der Nachdruck und die Auswertung von Pressemitteilungen und Reden sind
mit Quellenangabe allgemein gestattet. Weiterhin können Texte, Bilder, Grafiken und sonstige Dateien ganz oder teilweise dem Urheberrecht Dritter unterliegen. Auch über das
Bestehen möglicher Rechte Dritter geben Ihnen die für den Inhalt Verantwortlichen nähere Auskünfte.

#### Haftungsausschluss

Alle auf dieser Internetseite bereitgestellten Informationen haben wir nach bestem Wissen und Gewissen erarbeitet und geprüft. Eine Gewähr für die jederzeitige Aktualität,
Richtigkeit, Vollständigkeit und Verfügbarkeit der bereit gestellten Informationen können wir allerdings nicht übernehmen. Ein Vertragsverhältnis mit den Nutzern des
Internetangebots kommt nicht zustande.

Wir haften nicht für Schäden, die durch die Nutzung dieses Internetangebots entstehen. Dieser Haftungsausschluss gilt nicht, soweit die Vorschriften des § 839 BGB (Haftung bei
Amtspflichtverletzung) einschlägig sind. Für etwaige Schäden, die beim Aufrufen oder Herunterladen von Daten durch Schadsoftware oder der Installation oder Nutzung von Software
verursacht werden, übernehmen wir keine Haftung.

Falls im Einzelfall erforderlich: Der Haftungsausschluss gilt nicht für Informationen, die in den Anwendungsbereich der Europäischen Dienstleistungsrichtlinie (Richtlinie
2006/123/EG – DLRL) fallen. Für diese Informationen wird die Richtigkeit und Aktualität gewährleistet.

#### Links

Von unseren eigenen Inhalten sind Querverweise („Links“) auf die Webseiten anderer Anbieter zu unterscheiden. Durch diese Links ermöglichen wir lediglich den Zugang zur Nutzung
fremder Inhalte nach § 8 Telemediengesetz. Bei der erstmaligen Verknüpfung mit diesen Internetangeboten haben wir diese fremden Inhalte daraufhin überprüft, ob durch sie eine
mögliche zivilrechtliche oder strafrechtliche Verantwortlichkeit ausgelöst wird. Wir können diese fremden Inhalte aber nicht ständig auf Veränderungen überprüfen und daher auch
keine Verantwortung dafür übernehmen. Für illegale, fehlerhafte oder unvollständige Inhalte und insbesondere für Schäden, die aus der Nutzung oder Nichtnutzung von Informationen
Dritter entstehen, haftet allein der jeweilige Anbieter der Seite.

Imprint English template

# Imprint

#### Publisher

<University>
Postal address: <Postal address>
Telephone: <Telephone number>
Fax: <Fax number>
Email: <Email address>

#### Authorized to represent

The <University> is legally represented by the President <President>.

#### VAT identification number

<VAT identification number> (in accordance with § 27a of the German VAT tax act - UStG)

#### Responsible for content

<First name> <Last name>
<Street and house number>
<Zip code> <City>

#### Terms of use

Texts, images, graphics as well as the design of these Internet pages may be subject to copyright.
The following are not protected by copyright according to §5 of copyright law (Urheberrechtsgesetz (UrhG)).

* Laws, ordinances, official decrees and announcements as well as decisions and officially written guidelines for decisions, and
* other official works that have been published in the official interest for general knowledge, with the restriction that the provisions on prohibition of modification and indication of source in Section 62 (1) to (3) and Section 63 (1) and (2) UrhG apply accordingly.

As a private individual, you may use copyrighted material for private and other personal use within the scope of Section 53 UrhG.
Any duplication or use of objects such as images, diagrams, sounds or texts in other electronic or printed publications is not permitted without our agreement.
This consent will be granted upon request by the person responsible for the content.
The reprinting and evaluation of press releases and speeches are generally permitted with reference to the source.
Furthermore, texts, images, graphics and other files may be subject in whole or in part to the copyright of third parties.
The persons responsible for the content will also provide more detailed information on the existence of possible third-party rights.

#### Liability disclaimer

The information provided on this website has been collected and verified to the best of our knowledge and belief.
However, there will be no warranty that the information provided is up-to-date, correct, complete, and available.
There is no contractual relationship with users of this website.

We accept no liability for any loss or damage caused by using this website. The exclusion of liability does not apply where the provisions of the German Civil Code (BGB) on
liability in case of breach of official duty are applicable (§ 839 of the BGB). We accept no liability for any loss or damage caused by malware when accessing or downloading data
or the installation or use of software from this website.

Where necessary in individual cases: the exclusion of liability does not apply to information governed by the Directive 2006/123/EC of the European Parliament and of the Council.
This information is guaranteed to be accurate and up to date.

#### Links

Our own content is to be distinguished from cross-references (“links”) to websites of other providers.
These links only provide access for using third-party content in accordance with § 8 of the German telemedia act (TMG).
Prior to providing links to other websites, we review third-party content for potential civil or criminal liability.
However, a continuous review of third-party content for changes is not possible, and therefore we cannot accept any responsibility.
For illegal, incorrect, or incomplete content, including any damage arising from the use or non-use of third-party information,
liability rests solely with the provider of the website.

Programming Exercise adjustments

There are several variables that can be configured when using programming exercises. They are presented in this separate section to keep the ‘normal’ setup guide shorter.

Path variables

There are variables for several paths:

  • artemis.repo-clone-path

    Repositories that the Artemis server needs are stored in this folder. This e.g. affects repositories from students who use the online code editor, or the template/solution repositories of new exercises, as they are pushed to the VCS after modification.

    Files in this directory are usually not critical, as the latest pushed versions of these repositories are also stored at the VCS. However, changes that are saved in the online code editor but not yet committed will be lost when this folder is deleted.

  • artemis.repo-download-clone-path

    Repositories that were downloaded from Artemis are stored in this directory.

    Files in this directory can be removed without loss of data if the downloaded repositories are still present at the VCS. No data beyond what is stored in the VCS is kept in this directory (it can be retrieved by performing the download action again).

  • artemis.template-path

    Templates are available within Artemis. The templates should fit most environments, but there might be cases where you want to change them.

    This value specifies the path to the templates which should overwrite the default ones. Note that this is the path to the folder where the templates folder is located, not the path to the templates folder itself.

Templates

Templates are shipped with Artemis (they can be found within the src/main/resources/templates folder in GitHub). These templates should fit well for many deployments, but one might want to change some of them for special deployments.

As of now, you can overwrite the jenkins folder that is present within the src/main/resources/templates folder by placing a templates/ directory with the same structure next to the Artemis .war archive. Files that are present in the file system will be used; if a file is not present in the file system, it is loaded from the classpath (e.g. the .war archive).
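
For example, assuming Artemis runs from /opt/artemis as in the service file above, an overridden Jenkins template layout could look like this (a sketch; only the files you want to replace need to be present):

ls /opt/artemis
# artemis.war  templates
ls /opt/artemis/templates/jenkins
# same structure and file names as src/main/resources/templates/jenkins in the repository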

We plan to make other folders configurable as well, but this is not supported yet.

Jenkins template

The build process in Jenkins is stored in a config.xml file (in src/main/resources/templates/jenkins/). It is extended by a Jenkinsfile in the same directory that will be placed inside the config.xml file. The Jenkinsfile handles the functionality shared by all programming languages, like checking out the repositories and loading the actual exercise-specific pipeline script from the Artemis server.

Note

When overriding the Jenkinsfile with a custom one, note that it must start either

  • with pipeline (there must not be a comment before pipeline, but there can be one at any other position, if the Jenkinsfile-syntax allows it)

  • or the special comment // ARTEMIS: JenkinsPipeline in the first line.

The actual programming-language- or exercise-type-specific pipeline steps are defined in the form of scripted pipelines. In principle, this is a Groovy script which allows structuring the pipeline into smaller methods and conditionally executing steps, while still allowing the core structure blocks from declarative pipelines. You can override those pipeline.groovy files with the template mechanism described above.

Inside the pipeline.groovy some placeholders exist that will be filled by Artemis upon exercise creation from the server or exercise settings:

pipeline.groovy placeholders

Variable                       Replacement                                                       Origin
#dockerImage                   The container image that the tests will run in.                   Server configuration
#dockerArgs                    Additional flags passed to Docker when starting the container.    Server configuration
#isStaticCodeAnalysisEnabled   Defines if static code analysis should be performed.              Exercise configuration
#isTestWiseCoverageEnabled     Defines if testwise coverage should be collected.                 Exercise configuration

The pipeline.groovy file can be customized further by instructors after creating the exercise from within Artemis via the ‘Edit Build Plan’ button on the details page of the exercise.

Caching example for Maven

The container image used to run the maven-tests already contains a set of commonly used dependencies (see artemis-maven-docker). This significantly speeds up builds as the dependencies do not have to be downloaded every time a build is started. However, the dependencies included in the container image might not match the dependencies required in your tests (e.g. because you added new dependencies or the container image is outdated).

You can also cache the Maven dependencies on the machine that runs the builds (that is, outside the container) by editing the pipeline.groovy template.

Adjust the dockerFlags variable:

dockerFlags = '#dockerArgs -v artemis_maven_cache:/maven_cache -e MAVEN_OPTS="-Dmaven.repo.local=/maven_cache/repository"'

Note that this might allow students to access shared resources (e.g. jars used by Maven), and they might be able to overwrite them. You can use Ares to prevent this by restricting the resources the student’s code can access.

Alternatively, you can restrict the access to the mounted volume by changing the dockerFlags to

dockerFlags = '#dockerArgs -e MAVEN_OPTS="-Dmaven.repo.local=/maven_cache/repository"'

and changing the testRunner method into

void testRunner() {
    setDockerFlags()

    docker.image(dockerImage).inside(dockerFlags) { c ->
        runTestSteps()
    }
}

private void setDockerFlags() {
    if (isSolutionBuild) {
        dockerFlags += " -v artemis_maven_cache:/maven_cache"
    } else {
        dockerFlags += " -v artemis_maven_cache:/maven_cache:ro"
    }
}

This mounts the cache as writeable only when executing the tests for the solution repository, and as read-only when running the tests for students’ code.

Caching example for Gradle

In case of always writeable caches you can set -e GRADLE_USER_HOME=/gradle_cache as part of the dockerFlags instead of the MAVEN_OPTS like above.

For read-only caches like in the Maven example, define setDockerFlags() as

private void setDockerFlags() {
    if (isSolutionBuild) {
        dockerFlags += ' -e GRADLE_USER_HOME="/gradle_cache"'
        dockerFlags += ' -v artemis_gradle_cache:/gradle_cache'
    } else {
        dockerFlags += ' -e GRADLE_RO_DEP_CACHE="/gradle_cache/caches/"'
        dockerFlags += ' -v artemis_gradle_cache:/gradle_cache:ro'
    }
}

Bamboo, Bitbucket and Jira Setup

This section describes how to set up a programming exercise environment based on Bamboo, Bitbucket and Jira.

Please note that this setup will create a deployment that is very similar to the one used in production but has one difference:
In production, the builds are performed within Docker containers that are created by Bamboo (or its build agents). As we run Bamboo in a Docker container in this setup, creating new Docker containers within that container is not recommended (e.g. see this article). There are some solutions where one can pass the Docker socket to the Bamboo container, but none of these approaches works quite well here, as Bamboo uses mounted directories that cause issues.

Therefore, a check is included within the BambooBuildPlanService that ensures that builds are not started in Docker agents if the development setup is present.

Prerequisites:

Docker-Compose

Before you start the Docker Compose setup, check whether the Bamboo version in build.gradle (search for com.atlassian.bamboo:bamboo-specs) is equal to the Bamboo version number in the Docker Compose file docker/atlassian.yml. If the version numbers are not equal, adjust the version number. Further details about the Docker Compose setup can be found in the docker directory.

Execute the docker compose file e.g. with docker compose -f docker/atlassian.yml up -d.

Error handling: It can happen that there is an overlap with other Docker networks (ERROR: Pool overlaps with other one on this address space). Use the command docker network prune to resolve this issue.

Make sure that Docker has enough memory (~6 GB). To adapt it, go to Settings → Resources.

In case you want to enable Swift or C programming exercises, refer to the README in the docker directory.

Configure Bamboo, Bitbucket and Jira

By default, the Jira instance is reachable under localhost:8081, the Bamboo instance under localhost:8085 and the Bitbucket instance under localhost:7990.

Get evaluation licenses for Atlassian products: Atlassian Licenses

  1. Get licenses for Bamboo, Bitbucket and Jira Service Management.

    • Bamboo: Select Bamboo (Data Center) and not installed yet

    • Bitbucket: Select Bitbucket (Data Center) and not installed yet

    • Jira: Select Jira Service Management (formerly Service Desk) (Data Center) and not installed yet

  2. Provide the newly created license key during the setup and create an admin user with the same credentials in all 3 applications.

    • Bamboo:

    • Choose the H2 database

    • Select the evaluation/internal/test/dev setups if you are asked

    • Put the admin username and password into application-local.yml at artemis.version-control.user and artemis.continuous-integration.user.

    • Jira:

    • On startup select I'll set it up myself

    • Select Built In Database Connection

    • Create a sample project

    • Bitbucket: Do not connect Bitbucket with Jira yet

  3. Make sure that Jira, Bitbucket and Bamboo have finished starting up.

    Execute the shell script atlassian-setup.sh in the docker/atlassian directory (e.g. with ./docker/atlassian/atlassian-setup.sh). This script creates groups, users and assigns the user to their respective group.

  4. The script (step 3) has already created the required users and assigned them to their respective group in Jira. Now, make sure that they are assigned correctly according to the following test setup: users 1-5 are students, 6-10 are tutors, 11-15 are editors and 16-20 are instructors. The usernames are artemis_test_user_{1-20} and the password is again the username. When you create a course in Artemis you have to manually choose the created groups (students, tutors, editors, instructors).

  5. Use the user directories in Jira to synchronize the users in Bitbucket and Bamboo:

    (Screenshots: adding the Bitbucket and Bamboo applications in Jira)
    • Go to Bitbucket → User Directories and Bamboo → User Directories → Add Directories → Atlassian Crowd → use the URL http://jira:8080 as Server URL → use the application name and password which you used in the previous step. Also, you should decrease the synchronisation period (e.g. to 2 minutes).

    (Screenshot: Adding Crowd Server in Bitbucket)

    (Screenshot: Adding Crowd Server in Bamboo)

    • Press synchronise after adding the directory, the users and groups should now be available.

  6. Give the test users User access on Bitbucket: On the Administration interface (settings cogwheel on the top), go to the Global permissions. Type the names of all test users in the search field (“Add Users”) and give them the “Bitbucket User” permission. If you skip this step, the users will not be able to log in to Bitbucket or clone repositories.

  7. In Bamboo, create a global variable named SERVER_PLUGIN_SECRET_PASSWORD; the value of this variable will be used as the secret. The value of this variable should then be stored in src/main/resources/config/application-local.yml as the value of artemis-authentication-token-value. You can create a global variable from the settings in Bamboo.

  8. In Bamboo create a shared username and password credential where the username and password should be the same as the ones you used to create the Bitbucket admin user. The name of the shared credential must be equal to the value set in artemis.version-control.user.

    The shared user can be created via Bamboo → Bamboo Administration → Shared credentials → Add new credentials → Username and password

  9. Download the bamboo-server-notification-plugin and add it to Bamboo. Go to Bamboo → Manage apps → Upload app → select the downloaded .jar file → Upload

  10. Authorize the Bamboo agent. Bamboo Administration → Agents → Remote agents → Agent authentication

    Approve the agent and edit the IP address in a development setup to *.*.*.* as the Docker container doesn’t have a static IP address.

    ../../_images/bamboo_agent_configuration.png
  11. Generate a personal access token

    While username and password can still be used as a fallback, this option is already marked as deprecated and will be removed in the future.

    1. Personal access token for Bamboo:

      artemis:
          continuous-integration:
              user: <username>
              password: <password>
              token: #insert the token here
      
    2. Personal access token for Bitbucket:

      artemis:
          version-control:
              user: <username>
              password: <password>
              token: #insert the token here
      
  12. Add an SSH key for the admin user

    Artemis can clone/push the repositories during setup and for the online code editor using SSH. If the SSH key is not present, the username + token will be used as fallback (and all git operations will use HTTP(S) instead of SSH). If the token is also not present, the username + password will be used as fallback (again, using HTTP(S)).

    You first have to create an SSH key (locally), e.g. using ssh-keygen (more information on how to create an SSH key can be found e.g. at ssh.com or at atlassian.com); see the example at the end of this step.

    The list of supported ciphers can be found at Apache Mina.

    It is recommended to use a password to secure the private key, but it is not mandatory.

    Please note that the private key file must be named id_rsa, id_dsa, id_ecdsa or id_ed25519, depending on the ciphers used.

    You now have to extract the public key and add it to Bitbucket. Open the public key file (usually called id_rsa.pub (when using RSA)) and copy its content (you can also use cat id_rsa.pub to show the public key).

    Navigate to BITBUCKET-URL/plugins/servlet/ssh/account/keys and add the SSH key by pasting the content of the public key.

    <ssh-private-key-folder-path> is the path to the folder containing the id_rsa file (but without the filename). It will be used in the configuration of Artemis to specify where Artemis should look for the key and store the known_hosts file.

    <ssh-private-key-password> is the password used to secure the private key. It is also needed for the configuration of Artemis, but can be omitted if no password was set (e.g. for development environments).
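
    For example, a suitable key pair could be created like this (a minimal sketch; the folder ~/.ssh/artemis and the key comment are assumptions, any location works as long as you reference it in the Artemis configuration):

    mkdir -p ~/.ssh/artemis
    # the private key file must keep the name id_rsa so that Artemis can find it
    ssh-keygen -t rsa -b 4096 -C "artemis-admin" -f ~/.ssh/artemis/id_rsa
    # print the public key so that it can be pasted into Bitbucket
    cat ~/.ssh/artemis/id_rsa.pub

    In this example, <ssh-private-key-folder-path> would be ~/.ssh/artemis.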

Configure Artemis

  1. Modify src/main/resources/config/application-local.yml to include the correct URLs and credentials:

    repo-clone-path: ./repos/
    repo-download-clone-path: ./repos-download/
    bcrypt-salt-rounds: 11   # The number of salt rounds for the bcrypt password hashing. Lower numbers make it faster but less secure and vice versa.
                             # Please use the bcrypt benchmark tool to determine the best number of rounds for your system. https://github.com/ls1intum/bcrypt-Benchmark
    user-management:
        use-external: true
        external:
            url: http://localhost:8081
            user:  <jira-admin-user>
            password: <jira-admin-password>
            admin-group-name: instructors
        internal-admin:
            username: artemis_admin
            password: artemis_admin
    version-control:
        url: http://localhost:7990
        user:  <bitbucket-admin-user>
        password: <bitbucket-admin-password>
        token: <bitbucket-admin-token>   # step 11.2
        ssh-private-key-folder-path: <ssh-private-key-folder-path>
        ssh-private-key-password: <ssh-private-key-password>
    continuous-integration:
        url: http://localhost:8085
        user:  <bamboo-admin-user>
        password: <bamboo-admin-password>
        token: <bamboo-admin-token>   # step 11.1
        artemis-authentication-token-value: <artemis-authentication-token-value>   # step 7
    

If you run the Atlassian suite in containers and Artemis on your host machine, you may have to set internal URLs for Bamboo, so that the CI and VCS servers are reachable from each other. If Artemis is executed in a container in the same network, you won’t need to specify internal URLs, as Artemis can then communicate with Bamboo and Bitbucket, and Bamboo and Bitbucket can communicate with each other, using the same URL. If you use the default docker-compose setup, you can use the following configuration:

bamboo:
    internal-urls:
        ci-url: http://bamboo:8085
        vcs-url: http://bitbucket:7990
  2. Also, set the server URL in src/main/resources/config/application-local.yml:

    server:
        port: 8080                                         # The port of artemis
        url: http://172.20.0.1:8080                        # needs to be an ip
        # url: http://docker.for.mac.host.internal:8080   # If the above one does not work for mac try this one
        # url: http://host.docker.internal:8080           # If the above one does not work for windows try this one
    

In addition, you have to start Artemis with the profiles bamboo, bitbucket and jira so that the correct adapters will be used, e.g.:

--spring.profiles.active=dev,bamboo,bitbucket,jira,artemis,scheduling,local

All of these profiles are enabled by default when using one of the run configurations in IntelliJ. Please read Server Setup for more details.
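
If you start the server from a terminal instead, the profiles can be passed to the Gradle bootRun task; a minimal sketch, assuming you use the Gradle wrapper from the project root:

./gradlew bootRun --args='--spring.profiles.active=dev,bamboo,bitbucket,jira,artemis,scheduling,local'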

How to verify the connection works?

Artemis → Jira

You can log in to Artemis with the admin user you created in Jira

Artemis → Bitbucket

You can create a programming exercise

Artemis → Bamboo

You can create a programming exercise

Bitbucket → Bamboo

The build of the student’s repository is started after pushing to it

Bitbucket → Artemis

When using the code editor, after clicking on Submit, the text Building and testing… should appear.

Bamboo → Artemis

The build result is displayed in the code editor.


Jenkins and GitLab Setup

This section describes how to set up a programming exercise environment based on Jenkins and GitLab. Optional commands are in curly brackets {}.

The following assumes that all instances run on separate servers. If you have a single server, or your own NGINX instance, just skip all NGINX-related steps and use the configurations provided under Separate NGINX Configurations.

If you want to set up everything on your local development computer, ignore all NGINX-related steps. Just make sure that you use unique port mappings for your Docker containers (e.g. 8081 for GitLab, 8082 for Jenkins, 8080 for Artemis).

Prerequisites:

Make sure that Docker has enough memory (~6 GB). To adapt it, go to Preferences → Resources and restart Docker.

Artemis

In order to use Artemis with Jenkins as Continuous Integration Server and GitLab as Version Control Server, you have to configure the file application-prod.yml (Production Server) or application-artemis.yml (Local Development) accordingly. Please note that all values in <..> have to be configured properly. These values will be explained below in the corresponding sections. If you want to set up a local environment, copy the values below into your application-artemis.yml or application-local.yml file (the latter is recommended), and follow the GitLab Server Quickstart guide.

artemis:
 course-archives-path: ./exports/courses
 repo-clone-path: ./repos
 repo-download-clone-path: ./repos-download
 bcrypt-salt-rounds: 11  # The number of salt rounds for the bcrypt password hashing. Lower numbers make it faster but less secure and vice versa.
                         # Please use the bcrypt benchmark tool to determine the best number of rounds for your system. https://github.com/ls1intum/bcrypt-Benchmark
 user-management:
     use-external: false
     internal-admin:
         username: artemis_admin
         password: artemis_admin
     accept-terms: false
     login:
         account-name: TUM
 version-control:
     url: http://localhost:8081
     user: root
     password: artemis_admin # created in Gitlab Server Quickstart step 2
     token: artemis-gitlab-token # generated in Gitlab Server Quickstart steps 4 and 5
 continuous-integration:
     user: artemis_admin
     password: artemis_admin
     url: http://localhost:8082
     secret-push-token: AQAAABAAAAAg/aKNFWpF9m2Ust7VHDKJJJvLkntkaap2Ka3ZBhy5XjRd8s16vZhBz4fxzd4TH8Su # pre-generated or replaced in Automated Jenkins Server step 3
     vcs-credentials: artemis_gitlab_admin_credentials
     artemis-authentication-token-key: artemis_notification_plugin_token
     artemis-authentication-token-value: artemis_admin
     build-timeout: 30
 git:
     name: Artemis
     email: artemis.in@tum.de
jenkins:
    internal-urls:
        ci-url: http://jenkins:8080
        vcs-url: http://gitlab:80
    use-crumb: false
server:
     port: 8080
     url: http://172.17.0.1:8080 # `http://host.docker.internal:8080` for Windows

In addition, you have to start Artemis with the profiles gitlab and jenkins so that the correct adapters will be used, e.g.:

--spring.profiles.active=dev,jenkins,gitlab,artemis,scheduling

Please read Server Setup for more details.

For a local setup on Windows, you can use http://host.docker.internal with the chosen ports appended as the version-control and continuous-integration URLs.

Make sure to change the server.url value in application-dev.yml or application-prod.yml accordingly. This value will be used for the communication hooks from GitLab to Artemis and from Jenkins to Artemis. In case you use a different port than 80 (http) or 443 (https) for the communication, you have to append it to the server.url value, e.g. 127.0.0.1:8080.

When you start Artemis for the first time, it will automatically create an admin user.

Note: Sometimes Artemis does not generate the admin user, which may lead to a startup error. You will have to create the user manually in the MySQL database and in GitLab. Make sure both are set up correctly and follow these steps:

  1. Use the tool mentioned above to generate a password hash.
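
    If you just need a hash quickly, any bcrypt implementation will do; for example, htpasswd from the apache2-utils package can generate one (a sketch assuming a cost factor of 11 to match the bcrypt-salt-rounds setting; double-check that the resulting hash works with your Artemis version):

    # print a bcrypt hash (cost 11) for the password "artemis_admin"; tr strips the leading colon and the newline
    htpasswd -bnBC 11 "" artemis_admin | tr -d ':\n'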

  2. Connect to the database via a client like MySQL Workbench and execute the following query to create the user. Replace artemis_admin and HASHED_PASSWORD with your chosen username and password:

    INSERT INTO `artemis`.`jhi_user` (`id`,`login`,`password_hash`,`first_name`,`last_name`,`email`,
    `activated`,`lang_key`,`activation_key`,`reset_key`,`created_by`,`created_date`,`reset_date`,
    `last_modified_by`,`last_modified_date`,`image_url`,`last_notification_read`,`registration_number`)
    VALUES (1,"artemis_admin","HASHED_PASSWORD","artemis","administrator","artemis_admin@localhost",
    1,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL);
    
  3. Give the user admin and user roles:

    INSERT INTO `artemis`.`jhi_user_authority` (`user_id`, `authority_name`) VALUES (1,"ROLE_ADMIN");
    INSERT INTO `artemis`.`jhi_user_authority` (`user_id`, `authority_name`) VALUES (1,"ROLE_USER");
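
    You can quickly verify both inserts from the command line; a minimal sketch, assuming a local MySQL with user root and an empty password:

    mysql -u root -e "SELECT u.login, a.authority_name FROM artemis.jhi_user u JOIN artemis.jhi_user_authority a ON a.user_id = u.id WHERE u.login = 'artemis_admin';"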
    

  4. Create a user in GitLab (http://your-gitlab-domain/admin/users/new) and make sure that the username and email are the same as those of the user from the database:

../../_images/gitlab_admin_user.png

  5. Edit the new admin user (http://your-gitlab-domain/admin/users/artemis_admin/edit) to set the password to the same value as in the database:

../../_images/gitlab_admin_user_password.png

Starting the Artemis server should now succeed.

GitLab

GitLab Server Quickstart

The following steps describe how to set up the GitLab server in a semi-automated way. This is ideal as a quickstart for developers. For a more detailed setup, see Manual GitLab Server Setup. In a production setup, you have to at least change the root password (by either specifying it in step 1 or extracting the random password in step 2) and generate random access tokens (instead of the pre-defined values). Set the variable GENERATE_ACCESS_TOKENS to true in the gitlab-local-setup.sh script and use the generated tokens instead of the predefined ones.

  1. Start the GitLab container defined in docker/gitlab-jenkins-mysql.yml by running

    GITLAB_ROOT_PASSWORD=QLzq3QvpD1Zbq7A1VWvw docker compose -f docker/<Jenkins setup to be launched>.yml up --build -d gitlab
    

    If you want to generate a random password for the root user, remove the part before docker compose from the command. GitLab passwords must not contain commonly used combinations of words and letters.

    The file uses the GITLAB_OMNIBUS_CONFIG environment variable to configure the Gitlab instance after the container is started. It disables prometheus monitoring, sets the ssh port to 2222, and adjusts the monitoring endpoint whitelist by default.

  2. Wait a couple of minutes since GitLab can take some time to set up. Open the instance in your browser (usually http://localhost:8081).

    You can then log in using the username root and your password (which defaults to artemis_admin, if you used the command from above). If you did not specify the password, you can get the initial one using:

    docker compose -f docker/<Jenkins setup to be launched>.yml exec gitlab cat /etc/gitlab/initial_root_password
    
  3. Insert the GitLab root user password in the file application-local.yml (in src/main/resources) and insert the GitLab admin account. If you copied the template from above and used the default password, this is already done for you.

    artemis:
        version-control:
            url: http://localhost:8081
            user: root
            password: your.gitlab.admin.password # artemis_admin
    
  4. You now need to create an admin access token. You can do that using the following command (which takes a while to execute):

    docker compose -f docker/<Jenkins setup to be launched>.yml exec gitlab gitlab-rails runner "token = User.find_by_username('root').personal_access_tokens.create(scopes: ['api', 'read_api', 'read_user', 'read_repository', 'write_repository', 'sudo'], name: 'Artemis Admin Token', expires_at: 365.days.from_now); token.set_token('artemis-gitlab-token'); token.save!"
    
    You can also create it manually by navigating to http://localhost:8081/-/profile/personal_access_tokens?name=Artemis+Admin+token&scopes=api,read_api,read_user,read_repository,write_repository,sudo and generating a token with all scopes.
    Copy this token into the ADMIN_PERSONAL_ACCESS_TOKEN field in the docker/gitlab/gitlab-local-setup.sh file.
    If you used the command to generate the token, you don’t have to change the gitlab-local-setup.sh file.
  5. Adjust the GitLab setup by running the following command; this will configure GitLab’s network settings to allow local requests:

    docker compose -f docker/<Jenkins setup to be launched>.yml exec gitlab /bin/sh -c "sh /gitlab-local-setup.sh"
    

    This script can also generate random access tokens, which should be used in a production setup. Change the variable $GENERATE_ACCESS_TOKENS to true to generate the random tokens and insert them into the Artemis configuration file.

  6. You’re done! Follow the Automated Jenkins Server Setup section for configuring Jenkins.

Manual GitLab Server Setup

GitLab provides no possibility to set a user’s password via the API without forcing the user to change it afterwards (see Issue 19141). Therefore, you may want to patch the official GitLab Docker image using the following Dockerfile:

FROM gitlab/gitlab-ce:latest
RUN sed -i '/^.*user_params\[:password_expires_at\] = Time.current if admin_making_changes_for_another_user.*$/s/^/#/' /opt/gitlab/embedded/service/gitlab-rails/lib/api/users.rb

This Dockerfile disables the mechanism that sets the password to the expired state after it is changed via the API. If you want to use this custom image, you have to build it and replace all occurrences of gitlab/gitlab-ce:latest in the following instructions with your chosen image name.
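
For example (the image tag gitlab-ce-patched is just a placeholder, any name works):

# build the patched image from the Dockerfile above
docker build -t gitlab-ce-patched:latest .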

  1. Pull the latest GitLab Docker image (only if you don’t use your custom gitlab image)

    docker pull gitlab/gitlab-ce:latest
    
Start GitLab
  1. Run the image (and change the values for hostname and ports). Add -p 2222:22 if cloning/pushing via SSH should be possible. As GitLab runs in a Docker container and the default port for SSH (22) is typically used by the host running Docker, we change the port GitLab uses for SSH to 2222. This can be adjusted if needed.

    Make sure to remove the comments from the command before running it.

    docker run -itd --name gitlab \
        --hostname your.gitlab.domain.com \   # Specify the hostname
        --restart always \
        -m 3000m \                            # Optional argument to limit the memory usage of Gitlab
        -p 8081:80 -p 443:443 \               # Alternative 1: If you are NOT running your own NGINX instance
        -p <some port of your choosing>:80 \  # Alternative 2: If you ARE running your own NGINX instance
        -p 2222:22 \                          # Remove this if cloning via SSH should not be supported
        -v gitlab_data:/var/opt/gitlab \
        -v gitlab_logs:/var/log/gitlab \
        -v gitlab_config:/etc/gitlab \
        gitlab/gitlab-ce:latest
    
  2. Wait a couple of minutes until the container is deployed and GitLab is set up, then open the instance in your browser. You can get the initial password for the root user using docker exec gitlab cat /etc/gitlab/initial_root_password.

  3. We recommend renaming the root admin user to artemis. To rename the user, click on the image on the top right and select Settings. Now select Account on the left and change the username. Use the same password in the Artemis configuration file application-artemis.yml:

    artemis:
        version-control:
            user: artemis
            password: the.password.you.chose
    
  4. If you run your own NGINX or if you install GitLab on a local development computer, then skip the next steps (5-6)

  5. Configure GitLab to automatically generate certificates using LetsEncrypt. Edit the GitLab configuration

    docker exec -it gitlab /bin/bash
    nano /etc/gitlab/gitlab.rb
    

    And add the following part

    letsencrypt['enable'] = true                          # GitLab 10.5 and 10.6 require this option
    external_url "https://your.gitlab.domain.com"         # Must use https protocol
    letsencrypt['contact_emails'] = ['gitlab@your.gitlab.domain.com'] # Optional
    
    nginx['redirect_http_to_https'] = true
    nginx['redirect_http_to_https_port'] = 80
    
  6. Reconfigure GitLab to generate the certificate.

    # Save your changes and finally run
    gitlab-ctl reconfigure
    

    If this command fails, try using

    gitlab-ctl renew-le-certs
    
  7. Login to GitLab using the Artemis admin account and go to the profile settings (upper right corner → Preferences)

    ../../_images/gitlab_preferences_button.png
GitLab Access Token
  1. Go to Access Tokens

../../_images/gitlab_access_tokens_button.png
  1. Create a new token named “Artemis” and give it rights api, read_api, read_user, read_repository, write_repository, and sudo.

../../_images/artemis_gitlab_access_token.png
  1. Copy the generated token and insert it into the Artemis configuration file application-artemis.yml

    artemis:
        version-control:
            token: your.generated.api.token
    
  2. (Optional, only necessary for local setup) Allow outbound requests to local network

    There is a known limitation for the local setup: webhook URLs for the communication between GitLab and Artemis and between GitLab and Jenkins cannot include local IP addresses. This option can be deactivated in GitLab on <https://gitlab-url>/admin/application_settings/network → Outbound requests. Another possible solution is to register a local URL, e.g. using ngrok, to be available over a domain on the Internet.
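
    If you go the ngrok route, the tunnel could be started like this (a sketch; it assumes Artemis listens on port 8080 and that ngrok is installed and authenticated):

    # expose the local Artemis server under a public ngrok domain
    ngrok http 8080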

  3. Adjust the monitoring-endpoint whitelist. Run the following command

    docker exec -it gitlab /bin/bash
    

    Then edit the GitLab configuration

    nano /etc/gitlab/gitlab.rb
    

    Add the following lines

    gitlab_rails['monitoring_whitelist'] = ['0.0.0.0/0']
    gitlab_rails['gitlab_shell_ssh_port'] = 2222
    

    This will disable the firewall for all IP addresses. If you only want to allow the server that runs Artemis to query the information, replace 0.0.0.0/0 with ARTEMIS.SERVER.IP.ADDRESS/32

    If you use SSH and use a different port than 2222, you have to adjust the port above.

  4. Disable Prometheus. As we encountered issues with the Prometheus log files not being deleted and therefore filling up the disk space, we decided to disable Prometheus within GitLab. If you also want to disable Prometheus, edit the configuration again using

    nano /etc/gitlab/gitlab.rb
    

    and add the following line

    prometheus_monitoring['enable'] = false
    

    The issue with more details can be found here.

  5. Add an SSH key for the admin user.

    Artemis can clone/push the repositories during setup and for the online code editor using SSH. If the SSH key is not present, the username + token will be used as fallback (and all git operations will use HTTP(S) instead of SSH).

    You first have to create an SSH key (locally), e.g. using ssh-keygen (more information on how to create an SSH key can be found e.g. at ssh.com or at gitlab.com).

    The list of supported ciphers can be found at Apache Mina.

    It is recommended to use a password to secure the private key, but it is not mandatory.

    Please note that the private key file must be named id_rsa, id_dsa, id_ecdsa or id_ed25519, depending on the ciphers used.

    You now have to extract the public key and add it to GitLab. Open the public key file (usually called id_rsa.pub (when using RSA)) and copy its content (you can also use cat id_rsa.pub to show the public key).

    Navigate to GITLAB-URL/-/profile/keys and add the SSH key by pasting the content of the public key.

    <ssh-key-path> is the path to the folder containing the id_rsa file (but without the filename). It will be used in the configuration of Artemis to specify where Artemis should look for the key and store the known_hosts file.

    <ssh-private-key-password> is the password used to secure the private key. It is also needed for the configuration of Artemis, but can be omitted if no password was set (e.g. for development environments).

  6. Reconfigure GitLab

    gitlab-ctl reconfigure
    

Upgrade GitLab

You can upgrade GitLab by downloading the latest Docker image and starting a new container with the old volumes:

docker stop gitlab
docker rename gitlab gitlab_old
docker pull gitlab/gitlab-ce:latest

See https://hub.docker.com/r/gitlab/gitlab-ce/ for the latest version. You can also specify an earlier one.

Note that upgrading to a major version may require following an upgrade path. You can view supported paths here.

Start a GitLab container just as described in Start-Gitlab and wait for a couple of minutes. GitLab should configure itself automatically. If there are no issues, you can delete the old container using docker rm gitlab_old and the old image (see docker images) using docker rmi <old-image-id>. You can also remove all old images using docker image prune -a.
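
For reference, starting the new container with the old volumes could look like this (a condensed variant of the run command from Start-Gitlab; it assumes you kept the default volume names and ports):

docker run -itd --name gitlab \
    --hostname your.gitlab.domain.com \
    --restart always \
    -p 8081:80 -p 443:443 -p 2222:22 \
    -v gitlab_data:/var/opt/gitlab \
    -v gitlab_logs:/var/log/gitlab \
    -v gitlab_config:/etc/gitlab \
    gitlab/gitlab-ce:latest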

Jenkins

Automated Jenkins Server Setup

The following steps describe how to deploy a pre-configured version of the Jenkins server. This is ideal as a quickstart for developers. For a more detailed setup, see Manual Jenkins Server Setup. In a production setup, you have to at least change the user credentials (in the file jenkins-casc-config.yml) and generate random access tokens and push tokens.

1. Create a new access token in GitLab named Jenkins and give it api and read_repository rights. You can either do it manually or use the following command:

docker compose -f docker/<Jenkins setup to be launched>.yml exec gitlab gitlab-rails runner "token = User.find_by_username('root').personal_access_tokens.create(scopes: ['api', 'read_repository'], name: 'Jenkins', expires_at: 365.days.from_now); token.set_token('jenkins-gitlab-token'); token.save!"
  2. You can now first build and deploy Jenkins, then you can also start the other services that weren’t started yet:

    JAVA_OPTS=-Djenkins.install.runSetupWizard=false docker compose -f docker/<Jenkins setup to be launched>.yml up --build -d jenkins
    docker compose -f docker/<Jenkins setup to be launched>.yml up -d
    

    Jenkins is then reachable under http://localhost:8082/ and you can log in using the credentials specified in jenkins-casc-config.yml (defaults to artemis_admin as both username and password).

  3. You need to generate the secret-push-token.

    As there is currently an open issue with the presets for Jenkins in development environments, follow the steps described in GitLab to Jenkins push notification token to generate the token. In a production setup, you should use a random master.key in the file gitlab-jenkins-mysql.yml.

  4. The application-local.yml must be adapted with the values configured in jenkins-casc-config.yml:

artemis:
    user-management:
        use-external: false
        internal-admin:
            username: artemis_admin
            password: artemis_admin
    version-control:
        url: http://localhost:8081
        user: artemis_admin
        password: artemis_admin
    continuous-integration:
        user: artemis_admin
        password: artemis_admin
        url: http://localhost:8082
        secret-push-token: # pre-generated or replaced in Automated Jenkins Server step 3
        vcs-credentials: artemis_gitlab_admin_credentials
        artemis-authentication-token-key: artemis_notification_plugin_token
        artemis-authentication-token-value: artemis_admin
  5. Open src/main/resources/config/application-jenkins.yml and change the following (if you are using a development setup, the template at the beginning of this page already contains the correct values):

jenkins:
    internal-urls:
        ci-url: http://jenkins:8080
        vcs-url: http://gitlab:80
  6. You’re done. You can now run Artemis with the GitLab/Jenkins environment.

Manual Jenkins Server Setup

  1. Pull the latest Jenkins LTS Docker image

    Run the following command to get the latest Jenkins LTS Docker image.

    docker pull jenkins/jenkins:lts
    
  2. Create a custom docker image

    In order to install and use Maven with Java in the Jenkins container, you first have to install Maven, then download Java, and finally configure Maven to use this Java version instead of the default one. You also need to install Swift and SwiftLint if you want to be able to create Swift programming exercises.

    To perform all these steps automatically, you can prepare a Docker image:

    Create a Dockerfile with the content found in docker/jenkins/Dockerfile and save it in a file named Dockerfile, e.g. in the folder /opt/jenkins/ (e.g. using vim Dockerfile).

    Now run the command docker build --no-cache -t jenkins-artemis .

    This might take a while because Docker will download Java, but this is only required once.

  3. If you run your own NGINX or if you install Jenkins on a local development computer, then skip the next steps (4-7)

  4. Create a file increasing the maximum file size for the nginx proxy. The nginx-proxy uses a default file limit that is too small for the plugin that will be uploaded later. Skip this step if you have your own NGINX instance.

    echo "client_max_body_size 16m;" > client_max_body_size.conf
    
  5. The NGINX default timeout is pretty low. For the plagiarism check and for unlocking student repositories for the exam, a higher timeout is advisable. Therefore, we write our own nginx.conf and load it into the container.

    user  nginx;
    worker_processes  auto;
    
    error_log  /var/log/nginx/error.log warn;
    pid        /var/run/nginx.pid;
    
    
    events {
        worker_connections  1024;
    }
    
    
    http {
        include       /etc/nginx/mime.types;
        default_type  application/octet-stream;
    
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';
    
        access_log  /var/log/nginx/access.log  main;
    
        fastcgi_read_timeout 300;
        proxy_read_timeout 300;
    
        sendfile        on;
        #tcp_nopush     on;
    
        keepalive_timeout  65;
    
        #gzip  on;
    
        include /etc/nginx/conf.d/*.conf;
    }
    daemon off;
    
  6. Run the NGINX proxy Docker container; this will automatically set up all reverse proxies and force HTTPS on all connections. (This image would also set up proxies for all other running containers that have the VIRTUAL_HOST and VIRTUAL_PORT environment variables.) Skip this step if you have your own NGINX instance.

    docker run -itd --name nginx_proxy \
        -p 80:80 -p 443:443 \
        --restart always \
        -v /var/run/docker.sock:/tmp/docker.sock:ro \
        -v /etc/nginx/certs \
        -v /etc/nginx/vhost.d \
        -v /usr/share/nginx/html \
        -v $(pwd)/client_max_body_size.conf:/etc/nginx/conf.d/client_max_body_size.conf:ro \
        -v $(pwd)/nginx.conf:/etc/nginx/nginx.conf:ro \
        jwilder/nginx-proxy
    
  7. The NGINX proxy needs another Docker container to generate Let’s Encrypt certificates. Run the following command to start it (make sure to change the email address). Skip this step if you have your own NGINX instance.

    docker run --detach \
        --name nginx_proxy-letsencrypt \
        --volumes-from nginx_proxy \
        --volume /var/run/docker.sock:/var/run/docker.sock:ro \
        --env "DEFAULT_EMAIL=mail@yourdomain.tld" \
        jrcs/letsencrypt-nginx-proxy-companion
    
Start Jenkins
  1. Run Jenkins by executing the following command (change the hostname and choose which port alternative you need)

    docker run -itd --name jenkins \
        --restart always \
        -v jenkins_data:/var/jenkins_home \
        -v /var/run/docker.sock:/var/run/docker.sock \
        -v /usr/bin/docker:/usr/bin/docker:ro \
        -e VIRTUAL_HOST=your.jenkins.domain -e VIRTUAL_PORT=8080 \    # Alternative 1: If you are NOT using a separate NGINX instance
        -e LETSENCRYPT_HOST=your.jenkins.domain \                     # Only needed if Alternative 1 is used
        -p 8082:8080 \                                                # Alternative 2: If you ARE using a separate NGINX instance OR you ARE installing Jenkins on a local development computer
        -u root \
        jenkins/jenkins:lts
    

    If you still need the old setup with Python & Maven installed locally, use jenkins-artemis instead of jenkins/jenkins:lts. Also note that you can omit the -u root, -v /var/run/docker.sock:/var/run/docker.sock and -v /usr/bin/docker:/usr/bin/docker:ro parameters, if you do not want to run Docker builds on the Jenkins controller (but e.g. use remote agents).

  2. Open Jenkins in your browser (e.g. localhost:8082) and set up the admin user account (install all suggested plugins). You can get the initial admin password using the following command.

    # Jenkins highlights the password in the logs, you can't miss it
    docker logs -f jenkins
    # or alternatively
    docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword
    
  3. Set the chosen credentials in the Artemis configuration application-artemis.yml

    artemis:
        continuous-integration:
            user: your.chosen.username
            password: your.chosen.password
    

Required Jenkins Plugins

Note: The custom Jenkins Dockerfile takes advantage of the Plugin Installation Manager Tool for Jenkins to automatically install the plugins listed below. If you used the Dockerfile, you can skip these steps and the Server Notification Plugin section. The list of plugins is maintained in docker/jenkins/plugins.yml.

You will need to install the following plugins (apart from the recommended ones that got installed during the setup process):

  1. GitLab for enabling webhooks to and from GitLab

  2. Timestamper for adding the time to every line of the build output (Timestamper might already be installed)

  3. Pipeline for defining the build description using declarative files (Pipeline might already be installed)

    Note: This is a suite of plugins that will install multiple plugins

  4. Pipeline Maven to use Maven within the pipelines. If you want to use Docker for your build agents, you may also need to install Docker Pipeline.

  5. Matrix Authorization Strategy Plugin for configuring permissions for users on a project and build plan level (Matrix Authorization Strategy might already be installed).

The plugins above (and the pipeline setup associated with them) were introduced in Artemis 4.7.3. If you are using exercises that were created before 4.7.3, you also have to install these plugins:

Please note that this setup is deprecated and will be removed in the future. Please migrate to the new pipeline-setup if possible.

  1. Multiple SCMs for combining the exercise test and assignment repositories in one build

  2. Post Build Task for preparing build results to be exported to Artemis

  3. Xvfb for exercises based on GUI libraries, for which tests have to have some virtual display

Choose “Download now and install after restart” and check the “Restart Jenkins when installation is complete and no jobs are running” box.

Timestamper Configuration

Go to Manage Jenkins → System Configuration → Configure. There you will find the Timestamper configuration; use the following value for both formats:

'<b>'yyyy-MM-dd'T'HH:mm:ssX'</b> '
../../_images/timestamper_config.png

Server Notification Plugin

Artemis needs to receive a notification after every build, which contains the test results and additional commit information. For that purpose, we developed a Jenkins plugin that can aggregate and POST JUnit-formatted results to any URL.

You can download the current release of the plugin here (download the .hpi file). Go to the Jenkins plugin page (Manage Jenkins → System Configuration → Plugins) and install the downloaded file under the Advanced settings tab under Deploy Plugin.

../../_images/jenkins_custom_plugin.png

Jenkins Credentials

Go to Manage Jenkins → Security → Credentials → Jenkins → Global credentials and create the following credentials

GitLab API Token
  1. Create a new access token in GitLab named Jenkins and give it api rights and read_repository rights. For detailed instructions on how to create such a token follow Gitlab Access Token.

    ../../_images/gitlab_jenkins_token_rights.png
  2. Copy the generated token and create new Jenkins credentials:

    1. Kind: GitLab API token

    2. Scope: Global

    3. API token: your.copied.token

    4. Leave the ID field blank

    5. The description is up to you

  3. Go to the Jenkins settings Manage Jenkins → System. There you will find the GitLab settings. Fill in the URL of your GitLab instance and select the just created API token in the credentials dropdown. After you click on “Test Connection”, everything should work fine. If you have problems finding the right URL for your local docker setup, you can try http://host.docker.internal:8081 for Windows or http://docker.for.mac.host.internal:8081 for Mac if GitLab is reachable over port 8081.

    ../../_images/jenkins_gitlab_configuration.png
Server Notification Token
  1. Create a new Jenkins credential containing the token, which gets sent by the server notification plugin to Artemis with every build result:

    1. Kind: Secret text

    2. Scope: Global

    3. Secret: your.secret_token_value (choose any value you want, copy it for the next step)

    4. Leave the ID field blank

    5. The description is up to you

  2. Copy the generated ID of the new credentials and put it into the Artemis configuration application-artemis.yml

    artemis:
        continuous-integration:
            artemis-authentication-token-key: the.id.of.the.notification.token.credential
    
  3. Copy the actual value you chose for the token and put it into the Artemis configuration application-artemis.yml

    artemis:
        continuous-integration:
            artemis-authentication-token-value: the.actual.value.of.the.notification.token
    
GitLab Repository Access
  1. Create a new Jenkins credentials containing the username and password of the GitLab administrator account:

    1. Kind: Username with password

    2. Scope: Global

    3. Username: the_username_you_chose_for_the_gitlab_admin_user

    4. Password: the_password_you_chose_for_the_gitlab_admin_user

    5. Leave the ID field blank

    6. The description is up to you

  2. Copy the generated ID (e.g. ea0e3c08-4110-4g2f-9c83-fb2cdf6345fa) of the new credentials and put it into the Artemis configuration file application-artemis.yml

    artemis:
        continuous-integration:
            vcs-credentials: the.id.of.the.username.and.password.credentials.from.jenkins
    

GitLab to Jenkins push notification token

GitLab has to notify Jenkins build plans if there are any new commits to the repository. The push notification that gets sent here is secured by a token generated by Jenkins. In order to get this token, you have to do the following steps:

  1. Create a new item in Jenkins (use the Freestyle project type) and name it TestProject

  2. In the project configuration, go to Build Triggers → Build when a change is pushed to GitLab and activate this option

  3. Click on Advanced.

  4. You will now have a couple of new options here, one of them being a “Secret token”.

  5. Click on the “Generate” button right below the text box for that token.

  6. Copy the generated value, let’s call it $gitlab-push-token

  7. Apply this change to the plan (i.e. click on Apply)

../../_images/jenkins_test_project.png
  8. Perform a GET request to the following URL (e.g. with Postman) using Basic Authentication and the username and password you chose for the Jenkins admin account:

    GET https://your.jenkins.domain/job/TestProject/config.xml
    

    If you have xmllint installed, you can use this command, which will output the secret-push-token from steps 9 and 10 (you may have to adjust the username and password):

    curl -u artemis_admin:artemis_admin http://localhost:8082/job/TestProject/config.xml | xmllint --nowarning --xpath "//project/triggers/com.dabsquared.gitlabjenkins.GitLabPushTrigger/secretToken/text()" - | sed 's/^.\(.*\).$/\1/'
    
  9. You will get the whole configuration XML of the just created build plan; there you will find the following tag:

    <secretToken>{$some-long-encrypted-value}</secretToken>
    
../../_images/jenkins_project_config_xml.png

Job configuration XML

  10. Copy the secret-push-token value in the line <secretToken>{secret-push-token}</secretToken>. This is the encrypted value of the gitlab-push-token you generated in step 5.

  11. Now, you can delete this test project and input the following values into your Artemis configuration application-artemis.yml (replace the placeholders with the actual values you wrote down)

    artemis:
        continuous-integration:
            secret-push-token: $some-long-encrypted-value
    
  12. In a local setup, you have to disable CSRF, otherwise some API endpoints will return HTTP status 403 Forbidden. This is done by executing the following command: docker compose -f docker/<Jenkins setup to be launched>.yml exec -T jenkins dd of=/var/jenkins_home/init.groovy < docker/jenkins/jenkins-disable-csrf.groovy

    The last step is to disable the use-crumb option in application-local.yml:

    jenkins:
        use-crumb: false
    

Upgrading Jenkins

In order to upgrade Jenkins to a newer version, you need to rebuild the Docker image targeting the new version. The stable LTS versions can be viewed through the changelog, and the corresponding Docker images can be found on Docker Hub.

  1. Open the Jenkins Dockerfile and replace the value of FROM with jenkins/jenkins:lts. After running the command docker pull jenkins/jenkins:lts, this will use the latest LTS version in the following steps. You can also use a specific LTS version. For example, if you want to upgrade Jenkins to version 2.289.2, you will need to use the jenkins/jenkins:2.289.2-lts image.

  2. If you’re using docker compose, you can simply use the following command and skip the next steps.

    docker compose -f docker/<Jenkins setup to be launched>.yml up --build -d
    
  3. Build the new Docker image:

    docker build --no-cache -t jenkins-artemis .
    

    The new image is named jenkins-artemis.

  4. Stop the current Jenkins container (change jenkins to the name of your container):

    docker stop jenkins
    
  5. Rename the container to jenkins_old so that it can be used as a backup:

    docker rename jenkins jenkins_old
    
  6. Run the new Jenkins instance:

    docker run -itd --name jenkins --restart always \
     -v jenkins_data:/var/jenkins_home \
     -v /var/run/docker.sock:/var/run/docker.sock \
     -p 9080:8080 jenkins-artemis
    
  7. You can remove the backup container if it’s no longer needed:

    docker rm jenkins_old
    

You should also update the Jenkins plugins regularly for security reasons. You can update them directly in the web interface using the Plugin Manager.

Build agents

You can either run the builds locally (that means on the machine that hosts Jenkins) or on remote build agents.

Configuring local build agents

Go to Manage Jenkins → Nodes → Built-In Node → Configure

Configure your master node like this (adjust the number of executors, if needed). Make sure to add the docker label.

../../_images/jenkins_local_node.png

Jenkins local node

Alternative local build agents setup using docker

An alternative way of adding a build agent that uses Docker (similar to the remote agents below) but runs locally is to use the jenkins/ssh-agent Docker image.

Prerequisites:

  1. Make sure to have Docker installed

Agent setup:

  1. Create a new SSH key using ssh-keygen (if a passphrase is added, store it for later)

  2. Copy the public key content (e.g. in ~/.ssh/id_rsa.pub)

  3. Run:

    docker run -d --name jenkins_agent -v /var/run/docker.sock:/var/run/docker.sock \
    jenkins/ssh-agent:latest "<copied_public_key>"
    
  4. Get the GID of the ‘docker’ group on the host with cat /etc/group and remember it for later (these container-side steps are also consolidated into a small script after this list)

  5. Enter the agent’s container with docker exec -it jenkins_agent bash

  6. Install Docker with apt update && apt install docker.io

  7. Check if the group ‘docker’ already exists with cat /etc/group. If yes, remove it with groupdel docker

  8. Add a new ‘docker’ group with the same GID as seen in step 4 with groupadd -g <GID> docker

  9. Add ‘jenkins’ user to the group with usermod -aG docker jenkins

  10. Activate changes with newgrp docker

  11. Now check if ‘jenkins’ has the needed permissions to run docker commands

    1. Log in as ‘jenkins’ with su jenkins

    2. Try if docker inspect <agent_container_name> works or if a permission error occurs

    3. If a permission error occurs, try restarting the Docker container

  12. Now you can exit the container by executing exit twice (the first exits the jenkins user, the second the container)
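
Steps 4 and 6-9 can also be scripted from the host; a minimal sketch for a Linux host, assuming the agent container is named jenkins_agent as above (newgrp from step 10 is not needed when the commands run non-interactively):

# GID of the 'docker' group on the host (step 4)
DOCKER_GID=$(getent group docker | cut -d: -f3)

# install Docker inside the agent container and align its 'docker' group with the host GID (steps 6-9)
docker exec -u root jenkins_agent bash -c "
    apt update && apt install -y docker.io
    groupdel docker 2>/dev/null || true
    groupadd -g ${DOCKER_GID} docker
    usermod -aG docker jenkins
"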

Add agent in Jenkins:

  1. Open Jenkins in your browser (e.g. localhost:8082)

  2. Go to Manage Jenkins → Credentials → System → Global credentials (unrestricted) → Add Credentials

    • Kind: SSH Username with private key

    • Scope: Global (Jenkins, nodes, items, all child items, etc)

    • ID: leave blank

    • Description: Up to you

    • Username: jenkins

    • Private Key: <content of the previously generated private key> (e.g /root/.ssh/id_rsa)

    • Passphrase: <the previously entered passphrase> (you can leave it blank if none has been specified)

    ../../_images/alternative_jenkins_node_credentials.png
  3. Go to Manage Jenkins → Nodes → New Node

    • Node name: Up to you (e.g. Docker agent node)

    • Check ‘Permanent Agent’

    ../../_images/alternative_jenkins_node_setup.png
  4. Node settings:

    • # of executors: Up to you (e.g. 4)

    • Remote root directory: /home/jenkins/agent

    • Labels: docker

    • Usage: Only build jobs with label expressions matching this node

    • Launch method: Launch agents via SSH

    • Host: output of command docker inspect --format '{{ .Config.Hostname }}' jenkins_agent

    • Credentials: <the previously created SSH credential>

    • Host Key Verification Strategy: Non verifying Verification Strategy

    • Availability: Keep this agent online as much as possible

    ../../_images/alternative_jenkins_node.png
  5. Save the new node

  6. Node should now be up and running

Installing remote build agents

You might want to run the builds on additional Jenkins agents, especially if a large number of students use the system at the same time. Jenkins supports remote build agents: the actual compilation of the students’ submissions happens on these other machines, but the whole process is transparent to Artemis.

This guide explains setting up a remote agent on an Ubuntu virtual machine that supports docker builds.

Prerequisites: 1. Install Docker on the remote machine: https://docs.docker.com/engine/install/ubuntu/

  1. Add a new user to the remote machine that Jenkins will use: sudo adduser --disabled-password --gecos "" jenkins

  2. Add the jenkins user to the docker group (This allows the jenkins user to interact with docker): sudo usermod -a -G docker jenkins

  3. Generate a new SSH key locally (e.g. using ssh-keygen) and add the public key to the .ssh/authorized_keys file of the jenkins user on the agent VM.

  4. Validate that you can connect to the build agent machine using SSH and the generated private key and validate that you can use docker (docker ps should not show an error)

  5. Log in with your normal account on the build agent machine and install Java: sudo apt install default-jre

  6. Add a new secret in Jenkins, enter the private key you just generated and add the passphrase, if set:

    ../../_images/jenkins_ssh_credentials.png

    Jenkins SSH Credentials

  7. Add a new node (select a name and select Permanent Agent). Set the number of executors so that it matches your machine’s specs: this is the number of concurrent builds this agent can handle. It is recommended to match the number of cores of the machine, but you might want to adjust this later if needed.

    Set the remote root directory to /home/jenkins/remote_agent.

    Set the usage to Only build jobs with label expressions matching this node. This ensures that only docker-jobs will be built on this agent, and not other jobs.

    Add a label docker to the agent.

    Set the launch method to Launch via SSH and add the host of the machine. Select the credentials you just created and select Manually trusted key Verification Strategy as Host key verification Strategy. Save it.

    ../../_images/jenkins_node.png

    Add a Jenkins node

  8. Wait a few moments while Jenkins installs its remote agent on the agent’s machine. You can track the progress using the Log page when selecting the agent. System information should also be available.

  9. Change the settings of the master node to be used only for specific jobs. This ensures that the docker tasks are not executed on the master agent but on the remote agent.

../../_images/jenkins_master_node.png

Adjust Jenkins master node settings

  10. You are finished; the new agent should now also process builds.

Jenkins User Management

Artemis supports user management in Jenkins as of version 4.11.0. Creating an account in Artemis will also create an account on Jenkins using the same password. This enables users to login and access Jenkins. Updating and/or deleting users from Artemis will also lead to updating and/or deleting from Jenkins.

Unfortunately, Jenkins does not provide a REST API for user management, which presents the following caveats:

  • The username of a user is treated as a unique identifier in Jenkins.

  • It’s not possible to update an existing user with a single request. We update by deleting the user from Jenkins and recreating it with the updated data.

  • In Jenkins, users are created on an on-demand basis. For example, when a build is performed, its change log is computed and, as a result, commits from users whom Jenkins has never seen may be discovered and the corresponding users created.

  • Since Jenkins users may be re-created automatically, issues may occur such as 1) creating a user, deleting it, and then re-creating it and 2) changing the username of the user and reverting back to the previous one.

  • Updating a user will re-create it in Jenkins and therefore remove any additionally saved Jenkins-specific user data such as API access tokens.

Jenkins Build Plan Access Control Configuration

Artemis takes advantage of the Project-based Matrix Authorization Strategy plugin to support build plan access control in Jenkins. This enables specific Artemis users to access build plans and execute actions such as triggering a build. This section explains the changes required in Jenkins in order to set up build plan access control:

  1. Navigate to Manage Jenkins → Plugins → Installed plugins and make sure that you have the Matrix Authorization Strategy plugin installed

  2. Navigate to Manage Jenkins → Security and navigate to the “Authorization” section

  3. Select the “Project-based Matrix Authorization Strategy” option

  4. In the table make sure that the “Read” permission under the “Overall” section is assigned to the “Authenticated Users” user group.

  5. In the table, make sure that the “Administer” permission is assigned to all administrators.

  6. You are finished. If you want to fine-tune permissions assigned to teaching assistants and/or instructors, you can change them within the JenkinsJobPermission.java file.

../../_images/jenkins_authorization_permissions.png

Caching

You can configure caching for e.g. Maven repositories. See Programming Exercise adjustments for more details.

Separate NGINX Configurations

There are some placeholders in the following configurations. Replace them with your setup-specific values.

GitLab

server {
    listen 443 ssl http2;
    server_name your.gitlab.domain;
    ssl_session_cache shared:GitLabSSL:10m;
    include /etc/nginx/common/common_ssl.conf;
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
    add_header X-Frame-Options DENY;
    add_header Referrer-Policy same-origin;
    client_max_body_size 10m;
    client_body_buffer_size 1m;

    location / {
        proxy_pass              http://localhost:<your exposed GitLab HTTP port (default 80)>;
        proxy_read_timeout      300;
        proxy_connect_timeout   300;
        proxy_http_version      1.1;
        proxy_redirect          http://         https://;

        proxy_set_header    Host                $http_host;
        proxy_set_header    X-Real-IP           $remote_addr;
        proxy_set_header    X-Forwarded-For     $proxy_add_x_forwarded_for;
        proxy_set_header    X-Forwarded-Proto   $scheme;

        gzip off;
    }
}

Jenkins

server {
    listen 443 ssl http2;
    server_name your.jenkins.domain;
    ssl_session_cache shared:JenkinsSSL:10m;
    include /etc/nginx/common/common_ssl.conf;
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
    add_header X-Frame-Options DENY;
    add_header Referrer-Policy same-origin;
    client_max_body_size 10m;
    client_body_buffer_size 1m;

    location / {
        proxy_pass              http://localhost:<your exposed Jenkins HTTP port (default 8081)>;
        proxy_set_header        Host                $host:$server_port;
        proxy_set_header        X-Real-IP           $remote_addr;
        proxy_set_header        X-Forwarded-For     $proxy_add_x_forwarded_for;
        proxy_set_header        X-Forwarded-Proto   $scheme;
        proxy_redirect          http://             https://;

        # Required for new HTTP-based CLI
        proxy_http_version 1.1;
        proxy_request_buffering off;
        proxy_buffering off; # Required for HTTP-based CLI to work over SSL

        # workaround for https://issues.jenkins-ci.org/browse/JENKINS-45651
        add_header 'X-SSH-Endpoint' 'your.jenkins.domain.com:50022' always;
    }

    error_page 502 /502.html;
    location /502.html {
        root /usr/share/nginx/html;
        internal;
    }
}

/etc/nginx/common/common_ssl.conf

If you haven’t done so, generate the DH param file: sudo openssl dhparam -out /etc/nginx/dhparam.pem 4096

ssl_certificate     <path to your fullchain certificate>;
ssl_certificate_key <path to the private key of your certificate>;
ssl_protocols       TLSv1.2 TLSv1.3;
ssl_dhparam /etc/nginx/dhparam.pem;
ssl_prefer_server_ciphers   on;
ssl_ciphers ECDH+CHACHA20:EECDH+AESGCM:EDH+AESGCM:!AES128;
ssl_ecdh_curve secp384r1;
ssl_session_timeout  10m;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off;
ssl_stapling on;
ssl_stapling_verify on;
resolver <if you have any, specify them here> valid=300s;
resolver_timeout 5s;

GitLab CI and GitLab Setup

This section describes how to set up a programming exercise environment based on GitLab CI and GitLab.

Note

Depending on your operating system, it might not work with the predefined values (host.docker.internal). Therefore, it might be necessary to adapt these with e.g. your local IP address.

Prerequisites:

GitLab

This section describes how to set up a development environment for Artemis with GitLab and GitLab CI. The same basic steps as for a GitLab and Jenkins setup apply, but the steps that describe generating tokens for Jenkins can be skipped. For a production setup of GitLab, also see the documentation of the GitLab and Jenkins setup.

GitLab

  1. Depending on your operating system, it is necessary to update the hosts file of your machine to include the following lines:

    127.0.0.1       host.docker.internal
    ::1             host.docker.internal
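
    On Linux or macOS you could add these entries as follows (a sketch; on Windows, edit C:\Windows\System32\drivers\etc\hosts as administrator instead):

    echo "127.0.0.1       host.docker.internal" | sudo tee -a /etc/hosts
    echo "::1             host.docker.internal" | sudo tee -a /etc/hosts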
    
  2. Configure GitLab
    cp docker/env.example.gitlab-gitlabci.txt docker/.env
    
  3. Start GitLab and the GitLab Runner
    docker-compose -f docker/gitlab-gitlabci.yml --env-file docker/.env up --build -d
    
  4. Get your GitLab root password
    docker exec -it gitlab grep 'Password:' /etc/gitlab/initial_root_password
    
  5. Generate an access token

    Go to http://host.docker.internal/-/profile/personal_access_tokens and generate an access token with all scopes. This token is used in the Artemis configuration as artemis.version-control.token.
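
    Alternatively, you can create the token non-interactively, analogous to the command in the GitLab Server Quickstart (a sketch; the container name gitlab and the token value artemis-gitlab-token are assumptions, adjust them to your setup):

    docker exec -it gitlab gitlab-rails runner "token = User.find_by_username('root').personal_access_tokens.create(scopes: ['api', 'read_api', 'read_user', 'read_repository', 'write_repository', 'sudo'], name: 'Artemis Admin Token', expires_at: 365.days.from_now); token.set_token('artemis-gitlab-token'); token.save!"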

  6. Allow outbound requests to local network

    For setting up the webhook between Artemis and GitLab, it is necessary to allow requests to the local network. Go to http://host.docker.internal/admin/application_settings/network and allow the outbound requests. More information about this aspect can be found in the GitLab setup instructions (step 12).

GitLab Runner

  1. Register a new runner

    Log in to your GitLab instance and open http://host.docker.internal/admin/runners. Click on Register an instance runner and copy the registration token.

    Then execute this command with the registration token:

    docker exec -it gitlab-runner gitlab-runner register \
    --non-interactive \
    --executor "docker" \
    --docker-image alpine:latest \
    --url http://host.docker.internal:80 \
    --registration-token "PROJECT_REGISTRATION_TOKEN" \
    --description "docker-runner" \
    --maintenance-note "Test Runner" \
    --tag-list "docker,artemis" \
    --run-untagged="true" \
    --locked="false" \
    --access-level="not_protected"
    

    You should now find the runner in the list of runners (http://host.docker.internal/admin/runners)
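
    You can also verify the registration from the runner container itself (assuming the container is named gitlab-runner as in the command above):

    # list the runners configured in this container
    docker exec -it gitlab-runner gitlab-runner list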

Note

Adding a runner in a production setup works the same way. The GitLab administration page also contains alternative ways of setting up GitLab runners. All variants should allow passing the configuration options tag-list, run-untagged, locked, and access-level, similar to the Docker command above. If these are forgotten, Artemis might not use this runner to run the tests for exercise submissions.

Artemis

Note

Make sure that the database is empty and contains no data from previous Artemis runs.

  1. Generate authentication token

    The notification plugin has to authenticate to upload the test results. Therefore, a random string has to be generated, e.g., via a password generator. This should be used in place of the notification-plugin-token value in the example config below.
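
    For example, openssl can generate a suitable random string (just one option; any password generator works):

    # generate a random value to use as artemis-authentication-token-value
    openssl rand -hex 32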

  2. Configure Artemis

    For local development, copy the following configuration into the application-local.yml file and adapt it with the values from the previous steps.

    artemis:
        user-management:
            use-external: false
            internal-admin:
                username: artemis_admin
                password: gHn7JlggD9YPiarOEJSx19EFp2BDkkq9
            login:
                account-name: TUM
        version-control:
            url: http://host.docker.internal:80
            user: root
            password: password # change this value
            token: gitlab-personal-access-token # change this value
        continuous-integration:
            build-timeout: 30
            artemis-authentication-token-value: notification-plugin-token # change this value
        git:
            name: Artemis
            email: artemis.in@tum.de
    server:
        url: http://host.docker.internal:8080
    

Note

In GitLab, the password of a user must not be the same as the username and must fulfill specific requirements. Therefore, there is a random password in the example above.

  3. Start Artemis

    Start Artemis with the gitlab and gitlabci profiles.
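
    For example, analogous to the other setups in this guide (the exact profile list may differ in your environment), the Spring profiles could look like this:

    --spring.profiles.active=dev,gitlab,gitlabci,artemis,scheduling,local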


Local CI and local VC setup

This section describes how to set up a programming exercise environment based on the local CI and local VC systems. These two systems are integrated into the Artemis server application and thus the setup is greatly simplified compared to the external options. This also reduces system requirements as you do not have to run any systems in addition to the Artemis server. For now, this setup is only recommended for development and testing purposes. If you are setting Artemis up for the first time, these are the steps you should follow:

You can see the configuration in the following video:

Configure Artemis

Create a file src/main/resources/config/application-local.yml with the following content:

artemis:
    user-management:
        use-external: false # if you do not wish to use Jira for user management
    version-control:
        url: http://localhost:8080

The values configured here are sufficient for a basic Artemis setup that allows for running programming exercises with the local VC and local CI systems.

Hint

If you are running Artemis on Windows, you also need to add a property artemis.continuous-integration.docker-connection-uri with the value tcp://localhost:2375. The default value for this property is unix:///var/run/docker.sock, which is defined in src/main/resources/config/application-localci.yml. The application-local.yml file overrides the values set there. When using Windows, you will also need to make sure to enable the Docker setting “Expose daemon on tcp://localhost:2375 without TLS”. You can do this in Docker Desktop under Settings > General.
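
For example, the corresponding override in application-local.yml could look like this:

artemis:
    continuous-integration:
        docker-connection-uri: tcp://localhost:2375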

When you start Artemis for the first time, it will automatically create an admin user called “artemis_admin”. If this does not work, refer to the guide for the Jenkins and GitLab Setup to manually create an admin user in the database. You can then use that admin user to create further users in Artemis’ internal user management system.

Configure Jira

The local CI and local VC systems work fine without external user management configured, so this step is optional. Setting up Jira allows you to run a script that sets up a number of users and groups for you.

If you have already set up your system with Bamboo, Bitbucket, and Jira, you can keep using Jira for user management. Just stop the Bamboo and Bitbucket containers. If you want to use Jira for user management, but have not configured it yet, refer to the guide for the Bamboo, Bitbucket and Jira Setup. You can follow all steps to set up the entire Atlassian stack, or just get the license for Jira and only follow steps 1-3 leaving out the setup of the Bamboo and Bitbucket containers. You can stop and remove the Bamboo and Bitbucket containers or just stop them in case you want to set them up later on.

You also need to configure further settings in the src/main/resources/config/application-local.yml properties:

artemis:
    user-management:
        use-external: true
        external:
            url: http://localhost:8081
            user:  <jira-admin-user> # insert the admin user you created in Jira
            password: <jira-admin-password> # insert the admin user's password
            admin-group-name: instructors

Start Artemis

Start Artemis with the profiles localci and localvc so that the correct adapters will be used, e.g.:

--spring.profiles.active=dev,localci,localvc,artemis,scheduling,local

All of these profiles are enabled by default when using the Artemis (Server, LocalVC & LocalCI) run configuration in IntelliJ. Add jira to the list of profiles if you want to use Jira for user management: dev,localci,localvc,artemis,scheduling,local,jira. Please read Server Setup for more details.

Test the Setup

You can now test the setup:

  • Create a course and a programming exercise.

  • Log in as a student registered for that course and participate in the programming exercise, either from the online editor or by cloning the repository and pushing from your local environment.

  • Make sure that the result of your submission is displayed in the Artemis UI.

Hint

At the moment, the local VC system only supports accessing repositories via HTTP(S) and Basic Auth. We plan to add SSH support in the future. For now, you need to enter your Artemis credentials (username and password) when accessing template, solution, test, and assignment repositories.

If access is unauthorized, your Git client will display a corresponding error message.

Setup with Docker Compose

You can also use Docker Compose to set up the local CI and local VC systems. Using the following command, you can start the Artemis and MySQL containers:

docker compose -f docker/artemis-dev-local-vc-local-ci-mysql.yml up

Hint

Unix systems: When running the Artemis container on a Unix system, you will have to give the user running the container permission to access the Docker socket by adding them to the docker group. You can do this by changing the value of services.artemis-app.group_add in the docker/artemis-dev-local-vc-local-ci-mysql.yml file to the group ID of the docker group on your system. You can find the group ID by running getent group docker | cut -d: -f3. The default value is 999.

Windows: If you want to run the Docker containers locally on Windows, you will have to change the value for the Docker connection URI. You can add ARTEMIS_CONTINUOUSINTEGRATION_DOCKERCONNECTIONURI="tcp://host.docker.internal:2375" to the environment file, found in docker/artemis/config/dev-local-vc-local-ci.env. This overwrites the default value unix:///var/run/docker.sock for this property defined in src/main/resources/config/application-docker.yml.


Hermes Service

Push notifications for the mobile Android and iOS clients rely on the Hermes service. To enable push notifications, the Hermes service needs to be started separately, and the configuration of the Artemis instance must be extended.

Configure and start Hermes

To run Hermes, you need to clone the repository and replace the placeholders within the docker-compose file.

The following environment variables need to be updated for push notifications to Apple devices:

  • APNS_CERTIFICATE_PATH: String - Path to the APNs certificate .p12 file as described here

  • APNS_CERTIFICATE_PWD: String - The APNS certificate password

  • APNS_PROD_ENVIRONMENT: Bool - True if it should use the Production APNS Server (Default false)

Furthermore, the <APNS_Key>.p12 needs to be mounted into the Docker container at the path specified above.

To run the services for Android support the following environment variable is required:

  • GOOGLE_APPLICATION_CREDENTIALS: String - Path to the firebase.json

Furthermore, the firebase.json needs to be mounted into the Docker container at the path specified above.

To run both APNS and Firebase, configure the environment variables for both.
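
As a rough sketch (the service name, mount paths, and values below are placeholders, not the actual contents of the Hermes compose file), the relevant part could look like this:

services:
  hermes:
    environment:
      APNS_CERTIFICATE_PATH: /certs/apns.p12
      APNS_CERTIFICATE_PWD: <certificate-password>
      APNS_PROD_ENVIRONMENT: "false"
      GOOGLE_APPLICATION_CREDENTIALS: /certs/firebase.json
    volumes:
      - ./apns.p12:/certs/apns.p12            # APNs certificate
      - ./firebase.json:/certs/firebase.json  # Firebase credentials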

To start Hermes, run the docker compose up command in the folder where the docker-compose file is located.

Artemis Configuration

The Hermes service is running on a dedicated machine and is addressed via HTTPS. We need to extend the Artemis configuration in the file src/main/resources/config/application-artemis.yml like so:

artemis:
  # ...
  push-notification-relay: <url>

Athena Service

The semi-automatic text assessment relies on the Athena service. To enable automatic text assessments, special configuration is required:

Enable the athena Spring profile:

--spring.profiles.active=dev,bamboo,bitbucket,jira,artemis,scheduling,athena

Configure API Endpoints:

The Athena service is running on a dedicated machine and is addressed via HTTP. We need to extend the configuration in the file src/main/resources/config/application-artemis.yml like so:

artemis:
  # ...
  athena:
      url: http://localhost:5000
      secret: abcdef12345

The secret can be any string. For more detailed instructions on how to set it up in Athena, refer to the Athena documentation.


Apollon Service

The Apollon Converter is needed to convert models from their JSON representation to PDF. Special configuration is required:

Enable the apollon Spring profile:

--spring.profiles.active=dev,bamboo,bitbucket,jira,artemis,scheduling,apollon

Configure API Endpoints:

The Apollon conversion service is running on a dedicated machine and is addressed via HTTP. We need to extend the configuration in the file src/main/resources/config/application-artemis.yml like so:

apollon:
   conversion-service-url: http://localhost:8080

Iris/Pyris Service

Iris is an intelligent virtual tutor integrated into the Artemis platform. It is designed to provide one-on-one programming assistance without human tutors. The core technology of Iris is based on Generative AI and Large Language Models, like OpenAI’s GPT.

Iris also powers other smart features in Artemis, like the automatic generation of descriptions for hints.

This section outlines how to set up Iris in your own Artemis instance.

Prerequisites

  • Ensure you have a running instance of Artemis.

  • Set up a running instance of Pyris. Refer to the Pyris Setup Guide for more information.

Enable the iris Spring profile:

--spring.profiles.active=dev,bamboo,bitbucket,jira,artemis,scheduling,iris

Configure Pyris API Endpoints:

The Pyris service is running on a dedicated machine and is addressed via HTTP(s). We need to extend the configuration in the file src/main/resources/config/application-artemis.yml like so:

artemis:
  # ...
  iris:
      url: http://localhost:8000
      secret: abcdef12345

The secret can be any string. For more detailed instructions on how to set it up in Pyris, refer to the Pyris Setup Guide.


Common Setup Problems

General Setup Problems

  • Restarting IntelliJ with invalidated caches (File > Invalidate Caches…) might resolve the current issue.

  • When facing issues with deep dependencies and changes were made to the package.json file, executing npm install --force might resolve the issue.

  • When encountering a compilation error due to an invalid source release, make sure that you have set the Java version properly in the following three places:

    • File > Project Structure > Project Settings > Project > Project SDK

    • File > Project Structure > Project Settings > Project > Project Language Level

    • File > Settings > Build, Execution, Deployment > Build Tools > Gradle > Gradle JVM

Database

  • On the first startup, there might be issues with the text_block table. You can resolve the issue by executing ALTER TABLE text_block CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci; in your database.

  • One typical problem in the development setup is that an exception occurs during the database initialization. Artemis uses Liquibase to automatically upgrade the database schema after the data model has changed. This ensures that the changes can also be applied to the production server. In case you encounter errors with Liquibase checksum values:

    • Run the following command in your terminal / command line: ./gradlew liquibaseClearChecksums

    • You can manually adjust the checksum for a breaking changelog: UPDATE `DATABASECHANGELOG` SET `MD5SUM` = NULL WHERE `ID` = '<changelogId>'

Client

  • If you are using a machine with limited RAM (e.g. ~8 GB RAM), you might have issues starting the Artemis Client. You can resolve this by following the description in Using the command line.

Programming Exercise Setup

Atlassian Setup (Bamboo, Bitbucket and Jira)

  • When setting up Bamboo, Bitbucket, and Jira at the same time within the same browser, you might receive the message that the Jira token expired. You can resolve the issue by using another browser for configuring Jira, as there seems to be a synchronization problem within the browser.

  • When you create a new programming exercise and receive the error message The project <ProgrammingExerciseName> already exists in the CI Server. Please choose a different short name! and you have double checked that this project does not exist within the CI Server Bamboo, you might have to renew the trial licenses for the Atlassian products.

    Update Atlassian Licenses: You need to create new Atlassian licenses, which requires you to retrieve the server ID and navigate to the license editing page after creating new trial licenses.
    • Bamboo: Retrieve the Server ID and edit the license in License key details (Administration > Licensing)
    • Bitbucket: Retrieve the Server ID and edit the license in License Settings (Administration > Licensing)
    • Jira: Retrieve the Server ID (System > System info) and edit the JIRA Service Desk License key in Versions & licenses
  • When you push new code to Bitbucket (from your local machine or from the online code editor) for a Java or Kotlin exercise and no result is displayed in Artemis, check the corresponding Bamboo build plan. If the plan failed and it says “No failed test found. A possible compilation error occurred.”, then check the logs.

    • If it says ./gradlew: Permission denied, then go to the build plan configuration (Actions -> Configure plan -> Default Job) and add chmod +x gradlew to the “Tests” Script before the ./gradlew clean test line.

    • If it says Execution failed for task ':compileJava'. > invalid source release: 17, then change the ./gradlew clean test command in the build configuration to ./gradlew clean test -Dorg.gradle.java.home=/usr/lib/jvm/java-17-oracle (pointing to the Java installation that you added as a server capability).


Multiple Artemis instances

Setup with one instance

Artemis usually runs with one instance of the application server:

../../_images/deployment_before.drawio.png

Setup with multiple instances

There are certain scenarios, where a setup with multiple instances of the application server is required. This can e.g. be due to special requirements regarding fault tolerance or performance.

Artemis also supports this setup (which is also used at the Chair for Applied Software Engineering at TUM).

Multiple instances of the application server are used to distribute the load:

../../_images/deployment_after_simple.drawio.png

A load balancer (typically a reverse proxy such as nginx) is added that distributes the requests to the different instances.

Note: This documentation focuses on the practical setup of this distributed setup. More details regarding the theoretical aspects can be found in the Bachelor’s Thesis Securing and Scaling Artemis WebSocket Architecture (pdf).

Additional synchronization

All instances of the application server use the same database, but other parts of the system also have to be synchronized:

  1. Database cache

  2. WebSocket messages

  3. File system

Each of these three aspects is synchronized using a different solution.

Database cache

Artemis uses a cache provider that supports distributed caching: Hazelcast.

All instances of Artemis form a so-called cluster that allows them to synchronize their cache. You can use the configuration argument spring.hazelcast.interface to configure the interface on which Hazelcast will listen.

../../_images/deployment_hazelcast.drawio.png

One problem that arises with a distributed setup is that all instances have to know each other in order to create this cluster. This is problematic if the instances change dynamically. Artemis uses a discovery service (the JHipster Registry) to solve this issue.

Discovery service

JHipster registry contains Eureka, the discovery service where all instances can register themselves and fetch the other registered instances.

Eureka can be configured like this within Artemis:

# Eureka configuration
eureka:
    client:
        enabled: true
        service-url:
            defaultZone: {{ artemis_eureka_urls }}
    instance:
        prefer-ip-address: true
        ip-address: {{ artemis_ip_address }}
        appname: Artemis
        instanceId: Artemis:{{ artemis_eureka_instance_id }}

logging:
    file:
        name: '/opt/artemis/artemis.log'

{{ artemis_eureka_urls }} must be the URL where Eureka is reachable, {{ artemis_ip_address }} must be the IP under which this instance is reachable, and {{ artemis_eureka_instance_id }} must be a unique identifier for this instance. You also have to set the value jhipster.registry.password to the password of the registry (which you will set later).

Note that Hazelcast (which requires Eureka) binds to 127.0.0.1 by default to prevent other instances from forming a cluster without manual intervention. If you set up the cluster on multiple machines (which you should do for a production setup), you have to set the value spring.hazelcast.interface to the IP address of the machine. Hazelcast will then bind to this interface rather than 127.0.0.1, which allows other instances to establish connections to the instance. This setting must be set for every instance, and you have to make sure to adjust the IP address accordingly.
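
For example (the IP address below is just an illustration), the corresponding entries in each instance's configuration could look like this:

spring:
    hazelcast:
        interface: 192.168.1.10 # IP address of this machine
jhipster:
    registry:
        password: THE-REGISTRY-ADMIN-PASSWORD # the password configured for the registry below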

Setup

Installing

  1. Create the directory

sudo mkdir /opt/registry/
sudo mkdir /opt/registry/config-server
  2. Download the application

Download the latest version of the jhipster-registry from GitHub, e.g. by using

sudo wget -O /opt/registry/registry.jar https://github.com/jhipster/jhipster-registry/releases/download/v6.2.0/jhipster-registry-6.2.0.jar

Service configuration

  1. sudo vim /etc/systemd/system/registry.service

[Unit]
Description=Registry
After=syslog.target

[Service]
User=artemis
WorkingDirectory=/opt/registry
ExecStart=/usr/bin/java \
    -Xmx256m \
    -jar registry.jar \
    --spring.profiles.active=prod,native
SuccessExitStatus=143
StandardOutput=/opt/registry/registry.log
#StandardError=inherit

[Install]
WantedBy=multi-user.target
  2. Set Permissions in Registry Folder

sudo chown -R artemis:artemis /opt/registry
sudo chmod g+rwx /opt/registry
  3. Enable the service

sudo systemctl daemon-reload
sudo systemctl enable registry.service
  4. Start Service (only after performing steps 1-3 of the configuration)

sudo systemctl start registry
  5. Logging

sudo journalctl -f -n 1000 -u registry

Configuration

  1. sudo vim /opt/registry/application-prod.yml

logging:
    file:
        name: '/opt/registry/registry.log'

jhipster:
    security:
        authentication:
            jwt:
                base64-secret: THE-SAME-TOKEN-THAT-IS-USED-ON-THE-ARTEMIS-INSTANCES
    registry:
        password: AN-ADMIN-PASSWORD-THAT-MUST-BE-CHANGED
spring:
    security:
        user:
            password: AN-ADMIN-PASSWORD-THAT-MUST-BE-CHANGED
  2. sudo vim /opt/registry/bootstrap-prod.yml

jhipster:
    security:
        authentication:
            jwt:
                base64-secret: THE-SAME-TOKEN-THAT-IS-USED-ON-THE-ARTEMIS-INSTANCES
                secret: ''

spring:
    cloud:
        config:
            server:
                bootstrap: true
                composite:
                    - type: native
                      search-locations: file:./config-server
  3. sudo vim /opt/registry/config-server/application.yml

# Common configuration shared between all applications
configserver:
    name: Artemis JHipster Registry
    status: Connected to the Artemis JHipster Registry

jhipster:
    security:
        authentication:
            jwt:
                secret: ''
                base64-secret: THE-SAME-TOKEN-THAT-IS-USED-ON-THE-ARTEMIS-INSTANCES

eureka:
    client:
        service-url:
            defaultZone: http://admin:${jhipster.registry.password}@localhost:8761/eureka/

nginx config

You still have to make the registry available:

  1. sudo vim /etc/nginx/sites-available/registry.conf

server {
    listen 443 ssl http2;
    server_name REGISTRY_FQDN;
    ssl_session_cache shared:RegistrySSL:10m;
    include /etc/nginx/common/common_ssl.conf;
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
    add_header X-Frame-Options DENY;
    add_header Referrer-Policy same-origin;
    client_max_body_size 10m;
    client_body_buffer_size 1m;

    location / {
        proxy_pass              http://localhost:8761;
        proxy_read_timeout      300;
        proxy_connect_timeout   300;
        proxy_http_version      1.1;
        proxy_redirect          http://         https://;

        proxy_set_header    Host                $http_host;
        proxy_set_header    X-Real-IP           $remote_addr;
        proxy_set_header    X-Forwarded-For     $proxy_add_x_forwarded_for;
        proxy_set_header    X-Forwarded-Proto   $scheme;

        gzip off;
    }
}
  2. sudo ln -s /etc/nginx/sites-available/registry.conf /etc/nginx/sites-enabled/

This enables the registry in nginx.

  3. sudo service nginx restart

This will apply the config changes and the registry will be reachable.

WebSockets

WebSockets should also be synchronized (so that a user connected to one instance can perform an action which causes an update to users on different instances, without having to reload the page - such as quiz starts). We use a so-called broker for this (named Apache ActiveMQ Artemis).

It relays messages between instances:

../../_images/deployment_broker.drawio.png

Setup

  1. Create a folder to store ActiveMQ

sudo mkdir /opt/activemq-distribution
  2. Download ActiveMQ here: http://activemq.apache.org/components/artemis/download/

sudo wget -O /opt/activemq-distribution/activemq.tar.gz https://downloads.apache.org/activemq/activemq-artemis/2.13.0/apache-artemis-2.13.0-bin.tar.gz
  3. Extract the downloaded contents

cd /opt/activemq-distribution
sudo tar -xf activemq.tar.gz
  4. Navigate to the folder with the CLI

cd /opt/activemq-distribution/apache-artemis-2.13.0/bin
  5. Create a broker in the /opt/broker/broker1 directory, replace USERNAME and PASSWORD accordingly

sudo ./artemis create --user USERNAME --password PASSWORD --require-login /opt/broker/broker1
  6. Adjust the permissions

sudo chown -R artemis:artemis /opt/broker
sudo chmod g+rwx /opt/broker
  7. Adjust the configuration of the broker: sudo vim /opt/broker/broker1/etc/broker.xml

<?xml version='1.0'?>
<configuration xmlns="urn:activemq"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xmlns:xi="http://www.w3.org/2001/XInclude"
            xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">

<core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="urn:activemq:core ">

    <name>0.0.0.0</name>

    <journal-pool-files>10</journal-pool-files>

    <acceptors>
        <!-- STOMP Acceptor. -->
        <acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true;heartBeatToConnectionTtlModifier=6</acceptor>
    </acceptors>

    <connectors>
        <connector name="netty-connector">tcp://localhost:61616</connector>
    </connectors>

    <security-settings>
        <security-setting match="#">
            <permission type="createNonDurableQueue" roles="amq"/>
            <permission type="deleteNonDurableQueue" roles="amq"/>
            <permission type="createDurableQueue" roles="amq"/>
            <permission type="deleteDurableQueue" roles="amq"/>
            <permission type="createAddress" roles="amq"/>
            <permission type="deleteAddress" roles="amq"/>
            <permission type="consume" roles="amq"/>
            <permission type="browse" roles="amq"/>
            <permission type="send" roles="amq"/>
            <!-- we need this otherwise ./artemis data imp wouldn't work -->
            <permission type="manage" roles="amq"/>
        </security-setting>
    </security-settings>

    <address-settings>
        <!--default for catch all-->
        <address-setting match="#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <!-- with -1 only the global-max-size is in use for limiting -->
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
        </address-setting>
    </address-settings>
</core>
</configuration>
  8. Service configuration: sudo vim /etc/systemd/system/broker1.service

[Unit]
Description=ActiveMQ-Broker
After=network.target

[Service]
User=artemis
WorkingDirectory=/opt/broker/broker1
ExecStart=/opt/broker/broker1/bin/artemis run


[Install]
WantedBy=multi-user.target
  9. Enable the service

sudo systemctl daemon-reload
sudo systemctl enable broker1
sudo systemctl start broker1

Configuration of Artemis

Add the following values to your Artemis config:

spring:
    websocket:
        broker:
            username: USERNAME
            password: PASSWORD
            addresses: "localhost:61613"

USERNAME and PASSWORD are the values used in step 5. Replace localhost if the broker runs on a separate machine.

File system

The last (and also easiest) part to configure is the file system: You have to provide a folder that is shared between all instances of the application server (e.g. by using NFS).

You then have to set the following values in the application config:

artemis:
    repo-clone-path: {{ artemis_repo_basepath }}/repos/
    repo-download-clone-path: {{ artemis_repo_basepath }}/repos-download/
    file-upload-path: {{ artemis_repo_basepath }}/uploads
    submission-export-path: {{ artemis_repo_basepath }}/exports

Where {{ artemis_repo_basepath }} is the path to the shared folder.

The file system stores (as its name suggests) files, e.g. submissions to file upload exercises, repositories that are checked out for the online editor, course icons, etc.
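
For example, with NFS, each application server could mount the shared directory like this (server name and paths are placeholders, not values prescribed by Artemis):

# /etc/fstab on each Artemis application server
nfs-server:/srv/artemis-shared   /opt/artemis-shared   nfs   defaults   0   0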

Scheduling

Artemis uses scheduled tasks in various scenarios: e.g. to lock repositories on the due date, clean up unused resources, etc. As we now run multiple instances of Artemis, we have to ensure that the scheduled tasks are not executed multiple times. Artemis uses two approaches for this:

  1. Tasks for quizzes (e.g. evaluation once the quiz is due) are automatically distributed (using Hazelcast)

  2. Tasks for other exercises are only scheduled on one instance:

You must add the scheduling profile to exactly one instance of your cluster. This instance will then perform the scheduled tasks, whereas the other instances will not.
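
For example (assuming a production setup with a profile list similar to the ones used elsewhere in this guide), only the first instance gets the scheduling profile:

# Instance 1 (runs the scheduled tasks)
--spring.profiles.active=prod,bamboo,bitbucket,jira,artemis,scheduling
# Instances 2..n (no scheduling)
--spring.profiles.active=prod,bamboo,bitbucket,jira,artemis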

nginx configuration

You have to change the nginx configuration (of Artemis) to ensure that the load is distributed between all instances. This can be done by defining an upstream (containing all instances) and forwarding all requests to this upstream.

upstream artemis {
    server instance1:8080;
    server instance2:8080;
}
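
The server block of the Artemis site then forwards requests to this upstream instead of a single instance, e.g. (simplified sketch; your existing Artemis server block contains additional proxy and WebSocket settings):

location / {
    proxy_pass http://artemis;   # forward to the upstream defined above
}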

Overview

All instances can now communicate with each other on 3 different layers:

  • Database cache

  • WebSockets

  • File system

You can see the state of all connected instances within the registry:

../../_images/registry.png

Alternative: Docker Compose Setup

Getting Started with Docker Compose

  1. Install Docker Desktop or Docker Engine and Docker CLI with the Docker Compose Plugin (docker compose command).

    We DON’T support the usage of the Compose standalone binary (docker-compose command) as its installation method is no longer supported by Docker.

    We recommend the latest version of Docker Desktop or Docker Engine and Docker CLI with the Docker Compose Plugin. The minimum version for Docker Compose is 1.27.0; as of this version, the latest Compose file format is supported.

    Hint

    Make sure that Docker Desktop has enough memory (~ 6GB). To adapt it, go to Settings -> Resources.

  2. Check that all local network ports used by Docker Compose are free (e.g. make sure no local MySQL server is already running when you want to start a Docker Compose setup that includes mysql).

  3. Run docker compose pull && docker compose up in the directory docker/

  4. Open the Artemis instance in your browser at https://localhost

  5. Run docker compose down in the directory docker/ to stop and remove the docker containers

Tip

The docker compose pull command is only necessary as an extra step the first time; without it, Artemis would be built from source because you do not yet have an Artemis image locally.

For Arm-based Macs, Dev boards, etc., you will have to build the Artemis Docker Image first, as we currently do not distribute Docker Images for these architectures.

Other Docker Compose Setups

../../_images/artemis-docker-file-structure.drawio.png

Overview of the Artemis Docker / Docker Compose structure

The easiest way to configure a local deployment via Docker is a deployment with a docker compose file. In the directory docker/ you can find the following docker compose files for different setups:

  • artemis-dev-mysql.yml: Artemis-Dev-MySQL Setup containing the development build of Artemis and a MySQL DB

  • artemis-dev-postgres.yml: Artemis-Dev-Postgres Setup containing the development build of Artemis and a PostgreSQL DB

  • artemis-prod-mysql.yml: Artemis-Prod-MySQL Setup containing the production build of Artemis and a MySQL DB

  • artemis-prod-postgres.yml: Artemis-Prod-Postgres Setup containing the production build of Artemis and a PostgreSQL DB

  • atlassian.yml: Atlassian Setup containing a Jira, Bitbucket and Bamboo instance (see Bamboo, Bitbucket and Jira Setup Guide for the configuration of this setup)

  • gitlab-gitlabci.yml: GitLab-GitLabCI Setup containing a GitLab and GitLabCI instance

  • gitlab-jenkins.yml: GitLab-Jenkins Setup containing a GitLab and Jenkins instance (see Gitlab Server Quickstart Guide for the configuration of this setup)

  • monitoring.yml: Prometheus-Grafana Setup containing a Prometheus and Grafana instance

  • mysql.yml: MySQL Setup containing a MySQL DB instance

  • nginx.yml: Nginx Setup containing a preconfigured Nginx instance

  • postgres.yml: Postgres Setup containing a PostgreSQL DB instance

Three example commands to run such setups:

docker compose -f docker/atlassian.yml up
docker compose -f docker/mysql.yml -f docker/gitlab-jenkins.yml up
docker compose -f docker/artemis-dev-postgres.yml up

Tip

There is also a single docker-compose.yml in the directory docker/ which mirrors the setup of artemis-prod-mysql.yml. This should provide a quick way, without manual changes necessary, for new contributors to start up an Artemis instance. If the documentation just mentions to run docker compose without a -f <file.yml> argument, it’s assumed you are running the command from the docker/ directory.

For each service being used in these docker compose files, a base service (containing similar settings) is defined in the following files:

  • artemis.yml: Artemis Service

  • mysql.yml: MySQL DB Service

  • nginx.yml: Nginx Service

  • postgres.yml: PostgreSQL DB Service

  • gitlab.yml: GitLab Service

  • jenkins.yml: Jenkins Service

For testing mails or SAML logins, you can append the following services to any setup with an artemis container:

  • mailhog.yml: Mailhog Service (email testing tool)

  • saml-test.yml: Saml-Test Service (SAML Test Identity Provider for testing SAML features)

An example command to run such an extended setup:

docker compose -f docker/artemis-dev-mysql.yml -f docker/mailhog.yml up

Warning

If you want to run multiple docker compose setups in parallel on one host, you might have to modify volume, container, and network names!

Folder structure

Base services (compose file with just one service) and setups (compose files with multiple services) should be located directly in docker/.
Additional files like configuration files, Dockerfile, … should be in a subdirectory with the base service or setup name (docker/<base service or setup name>/).

Artemis Base Service

Everything related to the Docker Image of Artemis (built by the Dockerfile) can be found in the Server Setup section. All Artemis-related settings changed in Docker Compose files are described here.

The artemis.yml base service (e.g. in the artemis-prod-mysql.yml setup) defaults to the latest Artemis Docker Image tag in your local docker registry.
If you want to build the checked-out version, run docker compose build artemis-app before starting Artemis.
If you want a specific version from the GitHub container registry change the image: value to the desired image for the artemis-app service and run docker compose pull artemis-app.
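
For example, to pin a specific published image (the tag below is only an illustration; pick the version you need), the override could look like this before running docker compose pull artemis-app:

services:
    artemis-app:
        image: ghcr.io/ls1intum/artemis:7.0.0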

Debugging with Docker

See the Debugging with Docker section for detailed information. In all development docker compose setups like artemis-dev-mysql.yml, Java Remote Debugging is enabled by default.

Service, Container and Volume names

Service names for the usage within docker compose are kept short, like mysql, to make it easier to use them in a CLI.

Container and volume names are prepended with artemis- in order to not interfere with other container or volume names on your system.

Get a shell into the containers

Tip

To keep the documentation short, we will use the standard form of docker compose COMMAND from this point on. You can use the following commands also with the -f docker/<setup to be launched>.yml argument pointing to a specific setup.

  • app container: docker compose exec artemis-app bash or if the container is not yet running: docker compose run --rm artemis-app bash

  • mysql container: docker compose exec mysql bash, or directly into the MySQL client: docker compose exec mysql mysql

The same applies analogously to the other services.

Other useful commands

  • Start a setup in the background: docker compose up -d

  • Stop and remove containers of a setup: docker compose down

  • Stop, remove containers and volumes: docker compose down -v

  • Remove Artemis-related volumes/state: docker volume rm artemis-data artemis-mysql-data

    This is helpful in setups where you just want to delete the state of Artemis but not that of Jenkins or GitLab, for instance.

  • Stop a service: docker compose stop <name of the service> (restart via docker compose start <name of the service>)

  • Restart a service: docker compose restart <name of the service>

  • Remove all local Docker containers: docker container rm $(docker ps -a -q)

  • Remove all local Artemis Docker images: docker rmi $(docker images --filter=reference="ghcr.io/ls1intum/artemis:*" -q)


Alternative: Kubernetes Setup

This section describes how to set up an environment deployed in Kubernetes.

Prerequisites:

Follow the links to install the tools which will be needed to proceed with the Kubernetes cluster setup.

  • Docker - v20.10.7

    Docker is a platform for developing, shipping and running applications. In our case, we will use it to build the images which we will deploy. It is also needed by k3d to create a cluster. The cluster nodes are deployed on Docker containers.

  • DockerHub Account

    Docker Hub is a service provided by Docker for finding and sharing container images. A DockerHub account is needed to push the Artemis image, which will be used by the Kubernetes deployment.

  • k3d - v4.4.7

    k3d is a lightweight wrapper to run k3s which is a lightweight Kubernetes distribution in Docker. k3d makes it very easy to create k3s clusters especially for local deployment on Kubernetes.

    Windows users can use choco to install it. More details can be found in the link under Other Installation Methods.

  • kubectl - v1.21

    kubectl is the Kubernetes command-line tool, which allows you to run commands against Kubernetes clusters. It can be used to deploy applications, inspect and manage cluster resources, and view logs.

  • helm - v3.6.3

    Helm is the package manager for Kubernetes. We will use it to install cert-manager and Rancher.

Setup Kubernetes Cluster

To be able to deploy Artemis on Kubernetes, you need to set up a cluster. A cluster is a set of nodes that run containerized applications. Kubernetes clusters allow for applications to be more easily developed, moved and managed.

With the following commands, you will set up one cluster with three agents as well as Rancher, which is a cluster-management platform with an easy-to-use user interface.

IMPORTANT: Before you continue, make sure Docker has been started.

  1. Set environment variables

    The CLUSTER_NAME, RANCHER_SERVER_HOSTNAME and KUBECONFIG_FILE environment variables need to be set so that they can be used in the next commands. If you don’t want to set environment variables, you can substitute their values directly in the commands: replace $CLUSTER_NAME with "k3d-rancher", $RANCHER_SERVER_HOSTNAME with "rancher.localhost" and $KUBECONFIG_FILE with "k3d-rancher.yaml".

    For macOS/Linux:

    export CLUSTER_NAME="k3d-rancher"
    export RANCHER_SERVER_HOSTNAME="rancher.localhost"
    export KUBECONFIG_FILE="$CLUSTER_NAME.yaml"
    

    For Windows:

    $env:CLUSTER_NAME="k3d-rancher"
    $env:RANCHER_SERVER_HOSTNAME="rancher.localhost"
    $env:KUBECONFIG_FILE="${env:CLUSTER_NAME}.yaml"
    
  2. Create the cluster

    With the commands below, you can create a cluster with one server and three agents, for a total of four nodes. Your deployments will be distributed almost equally among the 4 nodes.

    Using k3d cluster list you can see whether your cluster is created and how many of its nodes are running.

    Using kubectl get nodes you can see the status of each node of the newly created cluster.

    You should also write the cluster configuration into the KUBECONFIG_FILE. This configuration will be later needed when you are creating deployments. You can either set the path to the file as an environment variable or replace it with “<path-to-kubeconfig-file>” when needed.

    For macOS/Linux:

    k3d cluster create $CLUSTER_NAME --api-port 6550 --servers 1 --agents 3 --port 443:443@loadbalancer --wait
    k3d cluster list
    kubectl get nodes
    k3d kubeconfig get $CLUSTER_NAME > $KUBECONFIG_FILE
    export KUBECONFIG=$KUBECONFIG_FILE
    

    For Windows:

    k3d cluster create $env:CLUSTER_NAME --api-port 6550 --servers 1 --agents 3 --port 443:443@loadbalancer --wait
    k3d cluster list
    kubectl get nodes
    k3d kubeconfig get ${env:CLUSTER_NAME} > $env:KUBECONFIG_FILE
    $env:KUBECONFIG=($env:KUBECONFIG_FILE)
    
  3. Install cert-manager

    cert-manager is used to add certificates and certificate issuers as resource types in Kubernetes clusters. It simplifies the process of obtaining, renewing and using those certificates. It can issue certificates from a variety of supported sources, e.g. Let’s Encrypt, HashiCorp Vault, Venafi.

    In our case, it will issue self-signed certificates to our Kubernetes deployments to secure the communication between the different deployments.

    Before the installation, you need to add the Jetstack repository and update the local Helm chart repository cache. cert-manager has to be installed in a separate namespace called cert-manager so one should be created as well. After the installation, you can check the status of the installation.

    helm repo add jetstack https://charts.jetstack.io
    helm repo update
    kubectl create namespace cert-manager
    helm install cert-manager jetstack/cert-manager --namespace cert-manager --version v1.0.4 --set installCRDs=true --wait
    kubectl -n cert-manager rollout status deploy/cert-manager
    
  4. Install Rancher

    Rancher is a Kubernetes management tool that allows you to create and manage Kubernetes deployments more easily than with the CLI tools.

    You can install Rancher using Helm - the package manager for Kubernetes. It has to be installed in a namespace called cattle-system and we should create such a namespace before the installation itself. During the installation, we set the namespace and the hostname on which Rancher will be accessible. Then we can check the installation status.

    For macOS/Linux:

    helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
    helm repo update
    kubectl create namespace cattle-system
    helm install rancher rancher-stable/rancher --namespace cattle-system --set hostname=$RANCHER_SERVER_HOSTNAME --version 2.5.9 --wait
    kubectl -n cattle-system rollout status deploy/rancher
    

    For Windows:

    helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
    helm repo update
    kubectl create namespace cattle-system
    helm install rancher rancher-stable/rancher --namespace cattle-system --set hostname=${env:RANCHER_SERVER_HOSTNAME} --version 2.5.9 --wait
    kubectl -n cattle-system rollout status deploy/rancher
    
  5. Open Rancher and update the password

Open Rancher on https://rancher.localhost/.

You will be notified that the connection is not private. The reason is that the Rancher deployment uses a self-signed certificate from an unknown authority (‘dynamiclistener-ca’), which is used for secure communication between internal components. Since it’s your local environment, this is not an issue and you can proceed to the website. If you can’t continue using the Chrome browser, you can try another browser, e.g. Firefox.

You will be prompted to set a password which later will be used to log in to Rancher. You will need this password frequently, so don’t forget it.

../../_images/rancher_password.png

Then you should save the Rancher Server URL; please use the predefined name.

../../_images/rancher_url.png

After saving, you will be redirected to the main page of Rancher, where you see your clusters. There will be one local cluster.

../../_images/rancher_cluster.png

You can open the workloads using the menu; there will be no workloads deployed at the moment.

../../_images/rancher_nav_workloads.png
../../_images/rancher_empty_workloads.png
  6. Create a new namespace in Rancher

Namespaces are virtual clusters backed by the same physical cluster. Namespaces provide a scope for names. Names of resources need to be unique within a namespace, but not across namespaces. Usually, different namespaces are created to separate environments deployments e.g. development, staging, production.

For our development purposes, we will create a namespace called artemis. It can be done easily using Rancher.

  1. Navigate to Namespaces using the top menu of Rancher

  2. Select Add Namespace to open the form for namespace creation

    ../../_images/rancher_namespaces.png
  3. Enter artemis as the namespace’s name and select the Create button

    ../../_images/rancher_create_namespace.png

Create DockerHub Repository

The Artemis image will be stored and managed in DockerHub. Kubernetes will pull it from there and deploy it afterwards.

After you log in to your DockerHub account you can create as many public repositories as you want. To create a repository you need to select the Create repository button.

DockerHub:

../../_images/dockerhub.png

Fill in artemis as the repository name, then use the Create button to create your repository.

../../_images/dockerhub_create_repository.png

Configure Docker ID (username)

The username in DockerHub is called Docker ID. You need to set your Docker ID in the artemis-deployment.yml resource so that Kubernetes knows where to pull the image from. Open the src/main/kubernetes/artemis/deployment/artemis-deployment.yml file and edit

template:
   spec:
      containers:
         - image: <DockerId>/artemis

and replace <DockerId> with your Docker ID from DockerHub.

e.g. it will look like this:

template:
   spec:
      containers:
         - image: mmehmed/artemis

Configure Artemis Resources

To run Artemis, you need to configure Artemis’ User Management, Version Control and Continuous Integration. You can run it either with Jira, Bitbucket, and Bamboo or with GitLab and Jenkins. Make sure to configure the src/main/resources/config/application-artemis.yml file with the proper configuration for User Management, Version Control and Continuous Integration.

You should skip setting the passwords and tokens there, since the Docker image that we are going to build would otherwise include those secrets. You can refer to the chapter Add/Edit Secrets for setting those values.

If you want to configure Artemis with Bitbucket, Jira, Bamboo you can set a connection to existing staging or production deployments. If you want to configure Artemis with local user management and no programming exercises continue with Configure Local User Management.

Configure Local User Management

If you want to run with local user management and no programming exercises setup follow the steps:

1. Go to the src/main/resources/config/application-artemis.yml file, and set use-external in the user-management section to false. If you have created an additional application-local.yml file as it is described in the Setup documentation, make sure to edit this one.

Another possibility is to add the variable directly in src/main/kubernetes/artemis/configmap/artemis-configmap.yml.

data:
   artemis.user-management.use-external: "false"

2. Remove the jira profile from the SPRING_PROFILES_ACTIVE field in the ConfigMap found at src/main/kubernetes/artemis/configmap/artemis-configmap.yml

Now you can continue with the next step Build Artemis

Build Artemis

Build the Artemis application war file using the following command:

./gradlew -Pprod -Pwar clean bootWar

Run Docker Build

Run Docker build and prepare the Artemis image to be pushed in DockerHub using the following command:

docker build -t <DockerId>/artemis -f docker/artemis/Dockerfile .

This will create the Docker image by copying the war file which was generated by the previous command.

Push to Docker

Push the image to DockerHub from where it will be pulled during the deployment:

docker push <DockerId>/artemis

In case you get an "Access denied" error during the push, first execute

docker login

and then try the docker push command again.

Configure Spring Profiles

ConfigMaps are used to store configuration data in key-value pairs.

You can change the Spring profiles used for running Artemis in the src/main/kubernetes/artemis/configmap/artemis-configmap.yml file by changing SPRING_PROFILES_ACTIVE. The current ones are set to use Bitbucket, Jira and Bamboo. If you want to use Jenkins and GitLab, please replace bamboo,bitbucket,jira with jenkins,gitlab. You can also change prod to dev if you want to run in the development profile.
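
For example, for a Jenkins and GitLab setup the entry could look like this (the remaining profiles depend on your configuration and may differ from this sketch):

data:
   SPRING_PROFILES_ACTIVE: prod,jenkins,gitlab,artemis,scheduling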

Deploy Kubernetes Resources

Kustomization files declare all resources to be deployed in one place, so the deployment can be performed with a single command.

Once you have your Artemis image pushed to Docker you can use the kustomization.yml file in src/main/kubernetes to deploy all the Kubernetes resources. You can do it by executing the following command:

kubectl apply -k src/main/kubernetes/artemis --kubeconfig <path-to-kubeconfig-file>

<path-to-kubeconfig-file> is the path where you created the KUBECONFIG_FILE.

In the console, you will see that the resources are created. It will take a little bit of time when you are doing this for the first time. Be patient!

../../_images/kubectl_kustomization.png

Add/Edit Secrets

Once you have deployed Artemis you need to add/edit the secrets so that it can run successfully.

Open Rancher using https://rancher.localhost/ and navigate to your cluster.

Then navigate to Secrets like shown below:

../../_images/rancher_secrets_menu.png

You will see a list of all defined secret files:

../../_images/rancher_secrets_list.png

Continue with artemis-secrets and you will see the values in the secret file. Then navigate to the edit page.

../../_images/rancher_secrets_edit.png

You can edit each secret you want or add more secrets. Once you select a value box, the value itself will be shown and you can edit it.

../../_images/rancher_secrets_edit_page.png

After you are done you can save your changes and redeploy the Artemis workload.

Check the Deployments in Rancher

Open Rancher using https://rancher.localhost/ and navigate to your cluster.

It may take some time, but in the end you should see that all the workloads have the Active status. In case there is a problem with some workloads, you can check the logs to see what the issue is.

../../_images/rancher_workloads.png

You can open the Artemis application using the link https://artemis-app.artemis.rancher.localhost/

You will get the same “Connection is not private” issue as you did when opening https://rancher.localhost/. As mentioned before, this is because a self-signed certificate is used, and it is safe to proceed.

It takes several minutes for the application to start. If you get a “Bad Gateway” error, the application may not have started yet. Wait several minutes, and if the issue persists (or another one appears), you can check the pod logs (described in the next chapter).

Check out the Logs

Open the workload whose logs you need to check. There is a list of pods. Open the menu for one of the pods and select View Logs. A pop-up with the logs will be opened.

../../_images/rancher_logs.png

Troubleshooting

If the Artemis application is successfully deployed but there is an error while trying to run it, the reason is most likely related to the Artemis yml configuration files. One common error is related to a missing server.url variable. You can fix it by adding it as an environment variable to the Artemis deployment.
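
If you prefer to set it directly in the deployment resource instead of through the Rancher UI described below, the container spec could, for example, contain an entry like this (Spring Boot maps the SERVER_URL environment variable to server.url; the value shown is the one used in this local setup):

env:
   - name: SERVER_URL
     value: https://artemis-app.artemis.rancher.localhost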

Set Additional Environment Variables

This chapter explains how you can set environment variables for your deployment in case you need it.

Open the Workloads view on Rancher

../../_images/rancher_workloads.png

Enter the details page of the Artemis workload and then select Edit in the three-dot menu

../../_images/workload_edit.png

Expand the Environment Variables menu. After pressing the Add Variable button two fields will appear where you can add the variable key and the value.

../../_images/workload_set_environment_variable.png

You can add as many variables as you want. Once you are done you can save your changes which will trigger the Redeploy of the application.