
Programming Exercise

Overview

Conducting a programming exercise consists of the following main steps:

  1. Instructor prepares exercise (step 1): Set up repositories containing the exercise code and test cases, configure build instructions, and set up the exercise in Artemis
  2. Students work on exercise (steps 2-5): Students clone repositories, solve exercises, and submit solutions
  3. Automated testing (step 6): The continuous integration server verifies submissions by executing test cases and provides feedback
  4. Instructor reviews results (step 7): Review overall results of all students and react to common errors and problems
Exercise Workflow
Programming Exercise Workflow (detailed student perspective shown in diagram)
ℹ️

The workflow diagram above shows the detailed steps from the student perspective. For a complete step-by-step guide on how students participate in programming exercises, see the Programming Exercise Student Guide.

ℹ️

Artemis supports a wide range of programming languages and is independent of specific version control and continuous integration systems. Instructors have significant freedom in defining the test environment.

Exercise Templates

Artemis supports templates for various programming languages to simplify exercise setup. The availability of features depends on the continuous integration system (Local CI or Jenkins) and the programming language. Instructors can still use those templates to generate programming exercises and then adapt and customize the settings in the repositories and build plans.

Supported Programming Languages

The following table provides an overview of supported programming languages and their corresponding templates:

No. | Programming Language | Build System/Notes | Docker Image
1 | Java | Gradle, Maven, DejaGnu | artemis-maven-docker
2 | Python | pip | artemis-python-docker
3 | C | Makefile, FACT, GCC | artemis-c-docker
4 | Haskell | Stack | artemis-haskell
5 | Kotlin | Maven | artemis-maven-docker
6 | VHDL | Makefile | artemis-vhdl-docker
7 | Assembler | Makefile | artemis-assembler-docker
8 | Swift | SwiftPM | artemis-swift-swiftlint-docker
9 | OCaml | Dune | artemis-ocaml-docker
10 | Rust | cargo | artemis-rust-docker
11 | JavaScript | npm | artemis-javascript-docker
12 | R | built-in | artemis-r-docker
13 | C++ | CMake | artemis-cpp-docker
14 | TypeScript | npm | artemis-javascript-docker
15 | C# | dotnet | artemis-csharp-docker
16 | Go | built-in | artemis-go-docker
17 | Bash | built-in | artemis-bash-docker
18 | MATLAB | built-in | matlab
19 | Ruby | Rake | artemis-ruby-docker
20 | Dart | built-in | artemis-dart-docker

Feature Support by Language

Feature Explanations

The features listed in the table above provide different capabilities for programming exercises.

Sequential Test Runs

Artemis can generate a build plan which first executes structural and then behavioral tests. This feature helps students better concentrate on the immediate challenge at hand by running tests in a specific order.

When enabled, structural tests (like checking class structure, method signatures) run first. Only if these pass will behavioral tests (testing actual functionality) execute. This prevents students from being overwhelmed by behavioral test failures when fundamental structural requirements aren't met yet.
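
For illustration, a structural check of this kind can be as simple as a reflection-based JUnit test. The following is a minimal sketch, not the Artemis template; the class name Book and method name borrow are only examples:

import java.lang.reflect.Method;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.fail;

// Minimal sketch of a structural test that could run in the first, structural stage:
// it only verifies that the class "Book" exists and declares a no-argument method "borrow".
public class BookStructuralTest {

    @Test
    public void testBookDeclaresBorrow() {
        try {
            Class<?> book = Class.forName("Book"); // assumes the default package for brevity
            Method borrow = book.getDeclaredMethod("borrow");
            if (!borrow.getReturnType().equals(void.class)) {
                fail("borrow() must not return a value.");
            }
        } catch (ClassNotFoundException e) {
            fail("The class 'Book' was not found within the submission.");
        } catch (NoSuchMethodException e) {
            fail("The class 'Book' must declare a method 'borrow()' without parameters.");
        }
    }
}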

Package Name

For some programming languages (primarily Java, Kotlin, Swift, and Go), you can specify a package name that will be used in the exercise template. This defines the package structure students must use in their code.

The package name is configured during exercise creation and helps maintain consistent code organization across student submissions.

Solution Repository Checkout

This feature allows instructors to compare a student submission against the sample solution in the solution repository. When available, instructors can view differences between student code and the reference solution side-by-side, making it easier to identify where students deviate from the expected approach.

Note: This feature is currently only supported for OCaml and Haskell exercises.

Auxiliary Repositories

Auxiliary repositories are additional repositories beyond the standard template, solution, and test repositories. They can be used to:

  • Provide additional resources needed during testing
  • Include libraries or dependencies
  • Overwrite template source code in testing scenarios

Each auxiliary repository has a name, checkout directory, and description. They are created during exercise setup and added to the build plan automatically. For more details, see the Auxiliary Repositories configuration section in Exercise Creation.

Feature Support by Language Matrix

Not all templates support the same feature set. The table below provides an overview of supported features for each language:

L = Local CI, J = Jenkins

Columns: No., Language, Sequential Test Runs, Static Code Analysis, Plagiarism Check, Package Name, Project Type, Solution Repository Checkout, Auxiliary Repositories.

Project types: Java: Gradle, Maven, DejaGnu; C: FACT, GCC; Swift: Plain; all other languages: n/a.

For many languages, features such as Sequential Test Runs and Static Code Analysis are supported with Local CI (L: ✅) but not with Jenkins (J: ❌); see the note below.
ℹ️

Note: Only some templates for Local CI support Sequential Test Runs. Static Code Analysis for some exercises is also only supported for Local CI. Instructors can still extend the generated programming exercises with additional features that are not available in a specific template.

We encourage instructors to contribute improvements to the existing templates or to provide new templates. Please contact Stephan Krusche and/or create Pull Requests in the GitHub repository.

Exercise Creation

Creating a programming exercise consists of the following steps:

  1. Generate programming exercise in Artemis
  2. Update exercise code in the template, solution, and test repositories
  3. Adapt the build script (optional)
  4. Configure static code analysis (optional)
  5. Adapt the interactive problem statement
  6. Configure grading settings
  7. Verify the exercise configuration

Generate Programming Exercise

Open Course Management and navigate to Exercises of your preferred course.

Course Dashboard
Course Management - Exercises

Click on Generate new programming exercise.

Exercise Dashboard
Exercise Dashboard
ℹ️

The guided mode has been removed. Artemis now provides a validation bar to navigate through sections and help validate the form. Watch the screencast below for more information.

Artemis provides various options to customize programming exercises:

Naming Section
Programming Options Naming
Naming Configuration
  • Title: The title of the exercise. Used to create a project on the VCS server. Can be changed after creation
  • Short Name: Together with the course short name, this creates a unique identifier for the exercise across Artemis (including repositories and build plans). Cannot be changed after creation
  • Preview: Shows the generated repository and build plan names based on the short names
Auxiliary Repositories
Programming Options Auxiliary Repositories
Auxiliary Repositories Configuration
  • Auxiliary Repositories: Additional repositories with a name, checkout directory, and description. Created and added to the build plan when the exercise is created. Cannot be changed after exercise creation
ℹ️

Auxiliary repositories are checked out to the specified directory during automatic testing if a checkout directory is set. This can be used for providing additional resources or overwriting template source code in testing exercises.

Categories
Programming Options Categories
Categories Configuration
  • Categories: Define up to two categories per exercise. Categories are visible to students and should be used consistently to group similar exercises
Participation Mode and Options
Programming Options Participation Mode
Participation Mode Configuration
  • Difficulty: Information about the difficulty level for students
  • Participation Mode: Whether students work individually or in teams. Cannot be changed after creation. Learn more about team exercises
  • Team Size: For team mode, provide recommended team size. Instructors/Tutors define actual teams after exercise creation
  • Allow Offline IDE: Allow students to clone their repository and work with their preferred IDE
  • Allow Online Editor: Allow students to use the Artemis Online Code Editor
  • Publish Build Plan: Allow students to access and edit their build plan. Useful for exercises where students configure parts of the build plan themselves
⚠️

At least one of Allow Offline IDE or Allow Online Editor must be active.

Programming Language and Project Settings
Programming Options Programming Language
Programming Language Configuration
  • Programming Language: The programming language for the exercise. Artemis chooses the template accordingly
  • Project Type: Determines the project structure of the template. Not available for all languages
  • With exemplary dependency: (Java only) Adds an external Apache commons-lang dependency as an example
  • Package Name: The package name for this exercise. Not available for all languages
  • Enable Static Code Analysis: Enable automated code quality checks. Cannot be changed after creation. See Static Code Analysis Configuration section below
  • Sequential Test Runs: First run structural, then behavioral tests. Helps students focus on immediate challenges. Not compatible with static code analysis. Cannot be changed after creation
  • Customize Build Plan: Customize the build plan of your exercise. Available for all languages with Local CI and Jenkins. Can also be customized after creation
Version Control Settings
Programming Options Version Control
Version Control Configuration
  • Allow Custom Branches: Allow students to push to branches other than the default one
⚠️

Artemis does not show custom branches in the UI and offers no merge support. Pushing to non-default branches does not trigger builds or create submissions. Students are fully responsible for managing their branches. Only activate if absolutely necessary!

  • Regular Expression for branch name: Custom branch names are matched against this Java regular expression. Pushing is only allowed if it matches
Score Configuration
Programming Options Score
Score Configuration
  • Should this exercise be included in the course/exam score calculation?

    • Yes: Define maximum Points and Bonus points. Total points count toward course/exam score
    • Bonus: Achieved Points count as bonus toward course/exam score
    • No: Achieved Points do not count toward course/exam score
  • Submission Policy: Configure initial submission policy. Choose between:

    • None: Unlimited submissions
    • Lock Repository: Limit submissions; lock repository when limit is reached
    • Submission Penalty: Unlimited submissions with point deductions after limit

    See Submission Policy Configuration section for details.

ℹ️

Submission policies can only be edited on the Grading Page after initial exercise generation.

Timeline Configuration
Programming Options Timeline Manual
Timeline Configuration - Manual Assessment
Programming Options Timeline Automatic
Timeline Configuration - Automatic Assessment
  • Release Date: When the exercise becomes visible to students
  • Start Date: When students can start participating. If not set, students can participate immediately after release
  • Automatic Tests: Every commit triggers test execution (except tests specified to run after due date)
  • Due Date: Deadline for the exercise. Commits after this date are not graded
  • Run Tests after Due Date: Build and test the latest in-time submission of each student on this date. Must be after due date. Use for executing hidden tests
  • Assessment Type: Choose between Automatic Assessment or Manual Assessment. For manual assessment, tutors review submissions
  • Assessment Due Date: Deadline for manual reviews. All assessments are released to students on this date
  • Example Solution Publication Date: When the solution repository becomes available for students. If blank, never published
Assessment Options
Programming Options Assessment
Assessment Configuration
  • Complaint on Automatic Assessment: Allow students to complain about automatic assessment after due date. Only available if complaints are enabled in the course or for exam exercises
ℹ️

Using practice mode, students can still commit code and receive feedback after the due date. These results are not rated.

  • Manual feedback requests: Enable manual feedback requests, allowing students to request feedback before the deadline. Each student can make one request at a time
  • Show Test Names to Students: Show names of automated test cases to students. If disabled, students cannot differentiate between automatic and manual feedback
  • Include tests into example solution: Include test cases in the example solution so students can run tests locally
Static Code Analysis Configuration
Programming Options SCA
Static Code Analysis Configuration
  • Max Static Code Analysis Penalty: Available if static code analysis is active. Maximum points that can be deducted for code quality issues as a percentage (0-100%) of Points. Defaults to 100% if empty
ℹ️

Example: Given an exercise with 10 Points and Max Static Code Analysis Penalty of 20%, at most 2 points will be deducted for code quality issues.

Problem Statement and Instructions
Programming Options Problem Statement
Problem Statement Configuration
  • Problem Statement: The exercise description shown to students. See Adapt Interactive Problem Statement section
Programming Options Instructions
Grading Instructions Configuration
  • Grading Instructions: Available for Manual Assessment. Create instructions for tutors during manual assessment

Click Generate to create the exercise.

Result: Programming Exercise Created

Course Dashboard Exercise Programming
Programming Exercise in Course Dashboard

Artemis creates the following repositories:

  • Template: Template code that all students receive at the start. Can be empty
  • Test: Contains all test cases (e.g., JUnit-based) and optionally static code analysis configuration. Hidden from students
  • Solution: Solution code, typically hidden from students, can be made available after the exercise

Artemis creates two build plans:

  • Template (BASE): Basic configuration for template + test repository. Used to create student build plans
  • Solution (SOLUTION): Configuration for solution + test repository. Used to verify exercise configuration

Update Exercise Code in Repositories

You have two alternatives to update the exercise code:

Alternative 1: Clone and Edit Locally

  1. Clone the 3 repositories and adapt the code on your local computer in your preferred IDE
  2. To execute tests locally:
    • Copy template (or solution) code into an assignment folder (location depends on language)
    • Execute tests (e.g., using mvn clean test for Java)
  3. Commit and push your changes via Git
Special Notes for Haskell

The build file expects the solution repository in the solution subdirectory and allows a template subdirectory for easy local testing.

Convenient checkout script:

#!/bin/sh
# Arguments:
# $1: exercise short name
# $2: (optional) output folder name

if [ -z "$1" ]; then
    echo "No exercise short name supplied."
    exit 1
fi

EXERCISE="$1"
NAME="${2:-$1}"

# Adapt BASE to your repository URL
BASE="ssh://git@artemis.tum.de:7999/$EXERCISE/$EXERCISE"

git clone "$BASE-tests.git" "$NAME" && \
git clone "$BASE-exercise.git" "$NAME/template" && \
git clone "$BASE-solution.git" "$NAME/solution" && \
cp -R "$NAME/template" "$NAME/assignment" && \
rm -r "$NAME/assignment/.git/"
Special Notes for OCaml

Tests expect to be in a tests folder next to assignment and solution folders.

Convenient checkout script:

#!/bin/sh
# Arguments:
# $1: exercise short name
# $2: (optional) output folder name

PREFIX= # Set your course prefix

if [ -z "$1" ]; then
    echo "No exercise short name supplied."
    exit 1
fi

EXERCISE="$PREFIX$1"
NAME="${2:-$1}"

BASE="ssh://git@artemis.tum.de:7999/$EXERCISE/$EXERCISE"

git clone "$BASE-tests.git" "$NAME/tests"
git clone "$BASE-exercise.git" "$NAME/template"
git clone "$BASE-solution.git" "$NAME/solution"

# Hardlink assignment interfaces
rm "$NAME/template/src/assignment.mli"
rm "$NAME/tests/assignment/assignment.mli"
rm "$NAME/tests/solution/solution.mli"
ln "$NAME/solution/src/assignment.mli" "$NAME/template/src/assignment.mli"
ln "$NAME/solution/src/assignment.mli" "$NAME/tests/assignment/assignment.mli"
ln "$NAME/solution/src/assignment.mli" "$NAME/tests/solution/solution.mli"

Test script:

#!/bin/sh
dir="$(realpath ./)"

cd .. || exit 1
rm ./assignment
ln -s "$dir" ./assignment
cd tests || exit 1
./run.sh

Alternative 2: Edit in Browser

Open Edit in Editor in Artemis and adapt the code in the online editor.

Instructor Editor
Edit in Online Editor

You can switch between different repositories and Submit the code when done.

Testing Frameworks by Language

Write test cases in the Test repository using language-specific frameworks:

No. | Language | Package Manager | Build System | Testing Framework
1 | Java | Maven / Gradle | Maven / Gradle | JUnit 5 with Ares
2 | Python | pip | pip | pytest
3 | C | - | Makefile | Python scripts / FACT
4 | Haskell | Stack | Stack | Tasty
5 | Kotlin | Maven | Maven | JUnit 5 with Ares
6 | VHDL | - | Makefile | Python scripts
7 | Assembler | - | Makefile | Python scripts
8 | Swift | SwiftPM | SwiftPM | XCTest
9 | OCaml | opam | Dune | OUnit2
10 | Rust | cargo | cargo | cargo test
11 | JavaScript | npm | npm | Jest
12 | R | built-in | - | testthat
13 | C++ | - | CMake | Catch2
14 | TypeScript | npm | npm | Jest
15 | C# | NuGet | dotnet | NUnit
16 | Go | built-in | built-in | go testing
17 | Bash | - | - | Bats
18 | MATLAB | mpm install | - | matlab.unittest
19 | Ruby | Gem | Rake | minitest
20 | Dart | pub | built-in | package.test

Check the build plan results:

  • Template and solution build plans should not have Build Failed status
  • If the build fails, check build errors in the build plan
⚠️

Hint: Test cases should only reference code available in the template repository. If this is not possible, try the Sequential Test Runs option.

Adapt Build Script (Optional)

This section is optional. The preconfigured build script usually does not need changes.

If you need additional build steps or different configurations, activate Customize Build Script in the exercise create/edit/import screen. All changes apply to all builds (template, solution, student submissions).

Predefined build scripts in bash exist for all languages, project types, and configurations. Most languages clone the test repository into the root folder and the assignment repository into the assignment folder.

You can also use a custom Docker image. Make sure to:

  • Publish the image in a publicly available repository (e.g., DockerHub)
  • Build for both amd64 and arm64 architectures
  • Keep the image size small (build agents download before execution)
  • Include all build dependencies to avoid downloading in every build
ℹ️

Test custom exercises locally before publishing Docker images and uploading to Artemis for better development experience.

Edit Repositories Checkout Paths (Optional)

Only available with Integrated Code Lifecycle

Preconfigured checkout paths usually don't need changes.

Checkout paths depend on the programming language and project type:

Checkout Paths Preview
Checkout Paths Preview (Java)

To change checkout paths, click edit repositories checkout path:

Checkout Paths Edit
Edit Checkout Paths (Java)

Update the build script accordingly (see the Adapt Build Script section).

⚠️

Important:

  • Checkout paths can only be changed during exercise creation, not after
  • Depending on language/project type, some paths may be predefined and unchangeable
  • Changing paths can cause build errors if build script is not adapted
  • For C exercises with default Docker image, changing paths will cause build errors

Edit Maximum Build Duration (Optional)

Only available with Integrated Code Lifecycle

The default maximum build duration (120 seconds) usually doesn't need changes.

Use the slider to adjust the time limit for build plan execution:

Timeout Slider
Maximum Build Duration Slider

Edit Container Configuration (Optional)

Only available with Integrated Code Lifecycle

In most cases, the default container configuration does not need to be changed.

Currently, instructors can change whether the container has internet access, add additional environment variables, and configure resource limits such as CPU and memory.

Docker Flags Edit
Container Configuration

Disabling internet access can be useful if instructors want to prevent students from downloading additional dependencies during the build process. If internet access is disabled, the container cannot access the internet during the build process. Thus, it will not be able to download additional dependencies. The dependencies must then be included/cached in the Docker image.

Additional environment variables can be added to the container configuration. This can be useful if the build process requires specific variables to be set.

Instructors can also adjust resource limits for the container. The number of CPU cores allocated to the container can be modified, as well as the maximum amount of memory and memory swap that the container is allowed to use. These settings help ensure that resource usage is balanced while allowing for flexibility in configuring the build environment. If set too high, the specified values may be overwritten by the maximum restrictions set by the administrators. Contact the administrators for more information.

We plan to add more options to the container configuration in the future.

Configure Static Code Analysis

If static code analysis was activated, the Test repository contains configuration files.

For Java exercises, the staticCodeAnalysisConfig folder contains configuration files for each tool. Artemis generates default configurations with predefined rules. Instructors can freely customize these files.

On exercise import, configurations are copied from the imported exercise.

Supported Static Code Analysis Tools

No. | Language | Supported Tools | Configuration File
1 | Java | Spotbugs | spotbugs-exclusions.xml
  |  | Checkstyle | checkstyle-configuration.xml
  |  | PMD | pmd-configuration.xml
  |  | PMD CPD | (via PMD plugin)
2 | Python | Ruff | ruff-student.toml
3 | C | GCC | (via compiler flags)
8 | Swift | SwiftLint | .swiftlint.yml
10 | Rust | Clippy | clippy.toml
11/14 | JavaScript/TypeScript | ESLint | eslint.config.mjs
12 | R | lintr | .lintr
13 | C++ | Clang-Tidy | .clang-tidy
19 | Ruby | Rubocop | .rubocop.yml
20 | Dart | dart analyze | analysis_options.yaml
ℹ️

Maven plugins for Java static code analysis tools provide additional configuration options. GCC can be configured by passing flags in the tasks. See GCC Documentation.

ℹ️

Instructors can completely disable specific tools by removing the plugin/dependency from the build file (pom.xml or build.gradle) or by altering the task/script that executes the tools in the build plan.

Special case: PMD and PMD CPD share a common plugin. To disable one or the other, instructors must delete the execution of the specific tool from the build plan.

Adapt Interactive Problem Statement

Click Edit on the programming exercise or navigate to Edit in Editor and adapt the interactive problem statement.

The initial example shows how to:

  • Integrate tasks
  • Link tests
  • Integrate interactive UML diagrams

Configure Grading

The grading configuration determines how test cases contribute to the overall score and allows configuration of code quality issue penalties.

General Actions

Configure Grading Update Information
Grading Configuration Update Information
  • Save: Save the current grading configuration
  • Reset: Reset the current grading configuration of the open tab to the default values. For the Test Case tab, all test cases are set to weight 1, bonus multiplier 1, and bonus points 0. For the Code Analysis tab, the default configuration depends on the selected programming language.
  • Re-evaluate All: Re-evaluate scores using the currently saved settings and existing feedback
  • Trigger All: Trigger all build plans to create new results with the updated configuration
⚠️

Artemis always grades new submissions with the latest configuration, but existing submissions might use outdated configuration. Artemis warns about grading inconsistencies with the Updated grading badge.

Test Case Configuration

Adapt test case contribution to the overall score or set grading based on tasks.

Configure Grading Test Cases
Test Case Grading Configuration
ℹ️

Artemis registers tasks and test cases from the Test repository using results from the Solution build plan. Test cases only appear after the first Solution build plan execution.

⚠️

If your problem statement does not contain tasks, task-based grading is not available. You can still configure grading based on individual test cases.

Left side configuration options:

  • Task/Test Name: Task names (bold) are from the problem statement. Test names are from the Test repository
  • Weight: Points for a test are proportional to its weight (the sum of all weights is the denominator); see the formula after this list. For tasks, weight is evenly distributed across test cases
  • Bonus multiplier: Multiply points for passing a test without affecting other test points. For tasks, multiplier applies to all contained tests
  • Bonus points: Add flat point bonus for passing a test. For tasks, bonus points are evenly distributed across test cases
  • Visibility: Control feedback visibility:
    • Always: Feedback visible immediately after grading
    • After Due Date: Feedback visible only after due date (or individual due dates)
    • Never: Feedback never visible to students. Not considered in score calculation
  • Passed %: Statistics showing percentage of students passing/failing the test
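
Putting the weight, bonus multiplier, and bonus points settings above together, the points a student receives for passing a test case t can be summarized as follows (a sketch of the default calculation, consistent with the examples below):

$$\text{points}(t) = \frac{w_t \cdot m_t}{\sum_i w_i} \cdot P_{\max} + b_t$$

where $w_t$ is the weight and $m_t$ the bonus multiplier of $t$, $b_t$ its bonus points, $\sum_i w_i$ the sum of all test case weights, and $P_{\max}$ the maximum points of the exercise.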
⚠️

Bonus points (score > 100%) are only achievable if at least one bonus multiplier > 1 or bonus points are given for a test case.

⚠️

For manual assessments, all feedback details are visible to students even if the due date hasn't passed for others. Set an appropriate assessment due date to prevent information leakage.

Examples:

ℹ️

Example 1: Given an exercise with 3 test cases, maximum points of 10 and 10 achievable bonus points. The highest achievable score is 200%. Test Case (TC) A has weight 2, TC B and TC C have weight 1 (bonus multipliers 1 and bonus points 0 for all test cases). A student that only passes TC A will receive 50% of the maximum points (5 points).

ℹ️

Example 2: Given the configuration of Example 1 with an additional bonus multiplier of 2 for TC A. Passing TC A accounts for (2 × 2) / (2 + 1 + 1) × 100 = 100% of the maximum points (10). Passing TC B or TC C accounts for 1/4 × 100 = 25% of the maximum points (2.5). If the student passes all test cases he will receive a score of 150%, which amounts to 10 points and 5 bonus points.

ℹ️

Example 3: Given the configuration of Example 2 with additional bonus points of 5 for TC B. The points achieved for passing TC A and TC C do not change. Passing TC B now accounts for 2.5 points plus 5 bonus points (7.5). If the student passes all test cases he will receive 10 (TC A) + 7.5 (TC B) + 2.5 (TC C) points, which amounts to 10 points and 10 bonus points and a score of 200%.

The right side displays statistics:

  • Weight Distribution: Impact of each test on score
  • Total Points: Percentage of points awarded per test case across all students

Code Analysis Configuration

Configure visibility and grading of code quality issues by category.

Configure Grading Code Analysis
Code Analysis Grading Configuration
ℹ️

The Code Analysis tab is only available if static code analysis is enabled for the exercise.

Issues are grouped into categories. The following table shows the category mappings for Java, Swift, and C:

Bad Practice (code that violates recommended and essential coding practices)
  Java: Spotbugs BAD_PRACTICE, Spotbugs I18N, PMD Best Practices
  C: GCC BadPractice
Code Style (code that is confusing and hard to maintain)
  Java: Spotbugs STYLE, Checkstyle blocks, Checkstyle coding, Checkstyle modifier, PMD Code Style
  Swift: Swiftlint (all rules)
Potential Bugs (coding mistakes, error-prone code or threading errors)
  Java: Spotbugs CORRECTNESS, Spotbugs MT_CORRECTNESS, PMD Error Prone, PMD Multithreading
  C: GCC Memory
Duplicated Code (code clones)
  Java: PMD CPD
Security (vulnerable code, unchecked inputs and security flaws)
  Java: Spotbugs MALICIOUS_CODE, Spotbugs SECURITY, PMD Security
  C: GCC Security
Performance (inefficient code)
  Java: Spotbugs PERFORMANCE, PMD Performance
Design (program structure/architecture and object design)
  Java: Checkstyle design, PMD Design
Code Metrics (violations of code complexity metrics or size limitations)
  Java: Checkstyle metrics, Checkstyle sizes
Documentation (code with missing or flawed documentation)
  Java: Checkstyle javadoc, Checkstyle annotation, PMD Documentation
Naming & Format (rules that ensure the readability of the source code: name conventions, imports, indentation, annotations, white spaces)
  Java: Checkstyle imports, Checkstyle indentation, Checkstyle naming, Checkstyle whitespace
Miscellaneous (uncategorized rules)
  Java: Checkstyle miscellaneous
  C: GCC Misc
ℹ️

For Swift, only the category Code Style can contain code quality issues currently. All other categories displayed on the grading page are dummies.

ℹ️

The GCC SCA option for C does not offer categories by default. Artemis categorizes the issues during parsing based on the rule that produced them. For details on the default configuration and active rules, see the GCC - Static Code Analysis Default Configuration section below.

Other languages use the categories defined by their respective static code analysis tools.

On the left side of the page, instructors can configure the static code analysis categories:

  • Category: The name of the category defined by Artemis
  • State:
    • INACTIVE: Code quality issues of an inactive category are not shown to students and do not influence the score calculation
    • FEEDBACK: Code quality issues of a feedback category are shown to students but do not influence the score calculation
    • GRADED: Code quality issues of a graded category are shown to students and deduct points according to the Penalty and Max Penalty configuration
  • Penalty: Artemis deducts the selected amount of points for each code quality issue from points achieved by passing test cases
  • Max Penalty: Limits the amount of points deducted for code quality issues belonging to this category
  • Detected Issues: Visualizes how many students encountered a specific number of issues in this category

Verify Exercise Configuration

Open the View page of the programming exercise.

Solution Template Result
Template and Solution Results

Verify:

  • Template result: Score 0% with "0 of X passed" (or "0 of X passed, 0 issues" with SCA)
  • Solution result: Score 100% with "X of X passed" (or "X of X passed, 0 issues" with SCA)
ℹ️

If static code analysis finds issues in template/solution, improve the code or disable the problematic rule.

Review Template/Solution Differences

Review differences between template and solution repositories to verify expected student changes.

Template Solution Diff
Template/Solution Comparison in Exercise Management

Click the Review Changes button to open the comparison view.

Template Solution Diff Example
Template/Solution Comparison View

Verify Problem Statement Integration

Click Edit. Below the problem statement, verify:

  • Test cases: OK
  • Hints: OK
Programming Edit Status
Problem Statement Verification Status

Feedback Analysis

After verifying the exercise configuration, the Feedback Analysis feature helps identify common student mistakes and improve grading efficiency.

Accessing Feedback Analysis:

  1. Navigate to Exercise Management
  2. Open the programming exercise
  3. Go to the grading section
  4. Click the Feedback Analysis tab
Feedback Analysis Overview
Feedback Analysis Overview

Key Features:

  1. Filtering and Sorting

    Filter by tasks, test cases, error categories, and occurrence frequency. Sort based on count or relevance.

    Feedback Analysis Filters
    Filtering Options
  2. Affected Students Overview

    View students affected by specific feedback and access their repositories for review.

    Feedback Analysis Affected Students
    Affected Students View
  3. Aggregated Feedback

    • Groups similar feedback messages together to highlight frequently occurring errors. This avoids many individual entries when test cases use random values and therefore generate slightly different but equivalent feedback.
    • Displays grouped occurrence counts and relative frequencies to help instructors prioritize common issues.
  4. Detailed Inspection

    Click feedback entries to view full details in a modal window.

    Feedback Analysis Detail
    Detailed Feedback View
  5. Communication & Collaboration

    Create discussion channels directly from the feedback analysis view for collaborative grading.

    Feedback Analysis Channel
    Creating a Feedback Discussion Channel

Static Code Analysis Default Configuration

The following sections list the rules that are active for the default static code analysis configuration.

Java

Spotbugs

All tool categories and their rules are active by default except for the NOISE and EXPERIMENTAL category. Refer to the Spotbugs documentation for a description of all rules.

Checkstyle

This table contains all rules that are activated by default when creating a new programming exercise. You can suppress a complete category by changing its visibility in the grading settings. For a more fine-granular configuration, you can add or remove rules by editing the checkstyle-configuration.xml file. For a description of the rules refer to the Checkstyle documentation.

Category (Tool/Artemis): Rules (non-default properties in parentheses)

Coding / Code Style: EmptyStatement, EqualsHashCode, HiddenField (ignoreConstructorParameter="true", ignoreSetter="true", setterCanReturnItsClass="true"), IllegalInstantiation, InnerAssignment, MagicNumber, MissingSwitchDefault, MultipleVariableDeclarations, SimplifyBooleanExpression, SimplifyBooleanReturn

Class Design / Design: FinalClass, HideUtilityClassConstructor, InterfaceIsType, VisibilityModifier

Block Checks / Code Style: AvoidNestedBlocks, EmptyBlock, NeedBraces

Modifiers / Code Style: ModifierOrder, RedundantModifier

Size Violations / Code Metrics: MethodLength, ParameterNumber, FileLength, LineLength (max="120")

Imports / Naming & Formatting: IllegalImport, RedundantImport, UnusedImports (processJavadoc="false")

Naming Conventions / Naming & Formatting: ConstantName, LocalFinalVariableName, LocalVariableName, MemberName, MethodName, ParameterName, TypeName

Whitespace / Naming & Formatting: EmptyForIteratorPad, GenericWhitespace, MethodParamPad, NoWhitespaceAfter, NoWhitespaceBefore, OperatorWrap, ParenPad, TypecastParenPad, WhitespaceAfter, WhitespaceAround

Javadoc Comments / Documentation: InvalidJavadocPosition, JavadocMethod, JavadocType, JavadocStyle, MissingJavadocMethod (allowMissingPropertyJavadoc="true", allowedAnnotations="Override,Test", tokens="METHOD_DEF,ANNOTATION_FIELD_DEF,COMPACT_CTOR_DEF")

Miscellaneous / Miscellaneous: ArrayTypeStyle, UpperEll, NewlineAtEndOfFile, Translation

PMD

For a description of the rules refer to the PMD documentation.

Category (Tool/Artemis): Rules

Best Practices / Bad Practice: AvoidUsingHardCodedIP, CheckResultSet, UnusedFormalParameter, UnusedLocalVariable, UnusedPrivateField, UnusedPrivateMethod, PrimitiveWrapperInstantiation

Code Style / Code Style: UnnecessaryImport, ExtendsObject, ForLoopShouldBeWhileLoop, TooManyStaticImports, UnnecessaryFullyQualifiedName, UnnecessaryModifier, UnnecessaryReturn, UselessParentheses, UselessQualifiedThis, EmptyControlStatement

Design / Design: CollapsibleIfStatements, SimplifiedTernary, UselessOverridingMethod

Error Prone / Potential Bugs: AvoidBranchingStatementAsLastInLoop, AvoidDecimalLiteralsInBigDecimalConstructor, AvoidMultipleUnaryOperators, AvoidUsingOctalValues, BrokenNullCheck, CheckSkipResult, ClassCastExceptionWithToArray, DontUseFloatTypeForLoopIndices, ImportFromSamePackage, JumbledIncrementer, MisplacedNullCheck, OverrideBothEqualsAndHashcode, ReturnFromFinallyBlock, UnconditionalIfStatement, UnnecessaryConversionTemporary, UnusedNullCheckInEquals, UselessOperationOnImmutable

Multithreading / Potential Bugs: AvoidThreadGroup, DontCallThreadRun, DoubleCheckedLocking

Performance / Performance: BigIntegerInstantiation

Security / Security: All rules

PMD CPD

Artemis uses the following default configuration to detect code duplications for the category Copy/Paste Detection. For a description of the various PMD CPD configuration parameters refer to the PMD CPD documentation.

<!-- Minimum amount of duplicated tokens triggering the copy-paste detection -->
<minimumTokens>60</minimumTokens>
<!-- Ignore literal value differences when evaluating a duplicate block.
If true, foo=42; and foo=43; will be seen as equivalent -->
<ignoreLiterals>true</ignoreLiterals>
<!-- Similar to ignoreLiterals but for identifiers, i.e. variable names, methods names.
If activated, most tokens will be ignored, so minimumTokens must be lowered significantly -->
<ignoreIdentifiers>false</ignoreIdentifiers>

C

GCC

For a description of the rules/warnings refer to the GCC Documentation. For readability reasons the rule/warning prefix -Wanalyzer- is omitted.

Category (Tool/Artemis): Rules

Memory Management / Potential Bugs: free-of-non-heap, malloc-leak, file-leak, mismatching-deallocation

Undefined Behavior / Potential Bugs: double-free, null-argument, use-after-free, use-of-uninitialized-value, write-to-const, write-to-string-literal, possible-null-argument, possible-null-dereference

Bad Practice / Bad Practice: double-fclose, too-complex, stale-setjmp-buffer

Security / Security: exposure-through-output-file, unsafe-call-within-signal-handler, use-of-pointer-in-stale-stack-frame, tainted-array-index

Miscellaneous / Miscellaneous: all rules not matching the above categories
ℹ️

GCC output can still contain regular warnings and compilation errors, which are also added to the Miscellaneous category. It is usually best to disable this category, since it contains findings that are not related to static code analysis: as of GCC 11.1.0, any warning or error that does not belong to the first four categories above is not a static code analysis issue.

Exercise Import

Exercise import copies repositories, build plans, interactive problem statement, and grading configuration from an existing exercise.

Import Steps

  1. Open Course Management

    Open Course Management and navigate to Exercises of your preferred course.

    Course Dashboard
    Course Management - Exercises
  2. Import Programming Exercise

    Click Import Programming Exercise.

    Import Programming Exercise
    Import Programming Exercise Button

    Select an exercise to import.

    Import Modal
    Exercise Selection Modal
    ℹ️

    Instructors can import exercises from courses where they are registered as instructors.

  3. Configure Import Options

    Import Options
    Import Options
    • Recreate Build Plans: Create new build plans instead of copying from imported exercise
    • Update Template: Update template files in repositories. Useful for outdated exercises. For Java, replaces JUnit4 with Ares (JUnit5) and updates dependencies. May require test case adaptation
    ℹ️

    Recreate Build Plans and Update Template are automatically set if the static code analysis option changes compared to the imported exercise. The plugins, dependencies, and static code analysis tool configurations are added/deleted/copied depending on the new and the original state of this option.

  4. Complete Import

    Fill mandatory values and click Import.

    ℹ️

    The interactive problem statement can be edited after import. Some options like Sequential Test Runs cannot be changed on import.

Manual Assessment

You can use the online editor to assess student submissions directly in the browser.

Online Editor Assessment
Manual Assessment in Online Editor

The online editor provides features tailored to assessment:

  1. File browser: Shows student submission files. Changed files are highlighted in yellow
  2. Build output: Shows build process output, useful for build errors
  3. Read-only editor: View student code with changed lines highlighted
  4. Instructions: Provides structured grading criteria and problem statement, including tasks successfully solved (determined by test cases). Review test cases by clicking the passing test count next to tasks
  5. Result: Top right corner shows current submission result. Click to review test cases and feedback

Add feedback directly in source code by hovering over a line and clicking Add Feedback. Alternatively, press "+" key when editor is focused to open feedback widget at cursor line.

Online Editor Feedback
Editing Feedback in Online Editor

After clicking Add Feedback:

  • Enter feedback comment and score
  • Or drag structured assessment criteria from instructions to feedback area
  • Click Save to save
  • Click Cancel to discard
General Feedback
General Feedback in Online Editor

Add general feedback (not tied to specific file/line) by scrolling to bottom and clicking Add General Feedback.

Save changes with Save (top right). Finalize assessment with Submit Assessment.

ℹ️

You can save multiple times before submitting. Once submitted, you cannot make changes unless you are an instructor.

Repository Access Configuration

If you are a student looking for repository access information, see the student documentation on integrated code lifecycle setup.

The following sections explain repository types and access rights for different user roles.

Repository Types

Repository Type | Description
Base | Repositories set up during exercise creation (template, solution, tests, auxiliary repositories)
Student Assignment | Student's assignment repository, copied from the template. Includes team assignments
Teaching Assistant Assignment | Assignment repository created by a Teaching Assistant for themselves
Instructor Assignment | Assignment repository created by an editor/instructor. Not available for exam exercises
Student Practice | Student's practice repository (from template or assignment). Created after the due date. Not available for exams
Teaching Assistant Practice | Practice repository created by a Teaching Assistant. Not available for exams
Instructor Practice | Practice repository created by an editor/instructor. Not available for exams
Instructor Exam Test Run | Test run repository for exam testing before release. Should be deleted before the exam

User Roles

Role | Description
Student | Course student
Teaching Assistant | Course tutor
Editor | Course editor
Instructor | Course instructor
ℹ️

Editors and instructors have the same access rights in the table below.

Time Periods

  • Before start: Before exercise/exam start date
  • Working time: After start date, before due/end date
  • After due: After due/end date
ℹ️

For Instructor Exam Test Run, "Before start" is the test run start date and "After due" is the test run end date (both before exam start).

Read access (R) includes git fetch, git clone, git pull. Write access (W) corresponds to git push.

Access Rights Table

Repository Type | Role | Time Period | Access
Base | Student | all | none
Base | Teaching Assistant | all | R
Base | Instructor | all | R/W
Student Assignment | Student | Before start | none
Student Assignment | Student | Working time | R/W
Student Assignment | Student | After due | R¹
Student Assignment | Teaching Assistant | all | R
Student Assignment | Instructor | all | R/W
Teaching Assistant Assignment | Student | all | none
Teaching Assistant Assignment | Teaching Assistant | Before start | R
Teaching Assistant Assignment | Teaching Assistant | Working time | R/W
Teaching Assistant Assignment | Teaching Assistant | After due | R
Teaching Assistant Assignment | Instructor | all | R/W
Instructor Assignment | Student | all | none
Instructor Assignment | Teaching Assistant | all | R
Instructor Assignment | Instructor | all | R/W²
Student Practice | Student | Before start | none
Student Practice | Student | Working time | none
Student Practice | Student | After due | R/W
Student Practice | Teaching Assistant | Before start | none
Student Practice | Teaching Assistant | Working time | none
Student Practice | Teaching Assistant | After due | R
Student Practice | Instructor | Before start | none
Student Practice | Instructor | Working time | none
Student Practice | Instructor | After due | R/W
Teaching Assistant Practice | Student | all | none
Teaching Assistant Practice | Teaching Assistant | Before start | none
Teaching Assistant Practice | Teaching Assistant | Working time | none
Teaching Assistant Practice | Teaching Assistant | After due | R/W
Teaching Assistant Practice | Instructor | Before start | none
Teaching Assistant Practice | Instructor | Working time | none
Teaching Assistant Practice | Instructor | After due | R/W
Instructor Practice | Student | all | none
Instructor Practice | Teaching Assistant | Before start | none
Instructor Practice | Teaching Assistant | Working time | none
Instructor Practice | Teaching Assistant | After due | R
Instructor Practice | Instructor | Before start | none
Instructor Practice | Instructor | Working time | none
Instructor Practice | Instructor | After due | R/W
Instructor Exam Test Run | Student | all | none
Instructor Exam Test Run | Teaching Assistant | all | R
Instructor Exam Test Run | Instructor | all | R/W

Notes:

  1. Only valid for course exercises. Students cannot read exam exercise repositories after due date.

  2. Instructors can access Instructor Assignment repository via online editor from Course Management (Edit in editor) or Course Overview (Open code editor). After due date, push only via online editor from Course Management or local Git client. Online editor from Course Overview shows locked repository.

ℹ️

Practice repositories, Teaching Assistant assignment, and instructor assignment repositories only exist for course exercises.

Testing with Ares

Ares is a JUnit 5 extension for easy and secure Java testing on Artemis.

Main features:

  • Security manager to prevent students from crashing tests or cheating
  • More robust tests and builds with limits on time, threads, and I/O
  • Support for public and hidden Artemis tests with custom due dates
  • Utilities for improved feedback (multiline error messages, exception location hints)
  • Utilities to test exercises using System.out and System.in

For more information see Ares GitHub

Best Practices for Writing Test Cases

The following sections describe best practices for writing test cases. Examples are specifically for Java (using Ares/JUnit5), but practices can be generalized for other languages.

General Best Practices

Write Meaningful Comments for Tests

Comments should contain:

  • What is tested specifically
  • Which task from problem statement is addressed
  • How many points the test is worth
  • Additional necessary information

Keep information consistent with Artemis settings like test case weights.

/**
 * Tests that borrow() in Book successfully sets the available attribute to false
 * Problem Statement Task 2.1
 * Worth 1.5 Points (Weight: 1)
 */
@Test
public void testBorrowInBook() {
    // Test Code
}

Better yet, use comments in display names for manual correction:

@DisplayName("1.5 P | Books can be borrowed successfully")
@Test
public void testBorrowInBook() {
    // Test Code
}
Use Appropriate and Descriptive Names for Test Cases

Test names are used for statistics. Avoid generic names like test1, test2, test3.

@Test
public void testBorrowInBook() {
    // Test Code
}

If tests are in different (nested) classes, add class name to avoid duplicates:

@Test
public void test_LinkedList_add() {
    // Test Code
}
ℹ️

For Java: If all test methods are in a single class, this is unnecessary (compiler won't allow duplicate methods).

Use Appropriate Timeouts for Test Cases

For regular tests, @StrictTimeout(1) (1 second) is usually sufficient. For shorter timeouts:

@Test
@StrictTimeout(value = 500, unit = TimeUnit.MILLISECONDS)
public void testBorrowInBook() {
    // Test Code
}

Can also be applied to entire test class.
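
For example, a class-level timeout might look like this (a minimal sketch; the import path assumes Ares' StrictTimeout annotation, and BookTest is a hypothetical test class):

import de.tum.in.test.api.StrictTimeout;
import org.junit.jupiter.api.Test;

// A strict timeout on the class applies to every test method in it (here: 2 seconds each).
@StrictTimeout(2)
public class BookTest {

    @Test
    public void testBorrowInBook() {
        // Test Code
    }

    @Test
    public void testReturnInBook() {
        // Test Code
    }
}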

⚠️

Remember tests run on CI servers (build agents). Tests execute slower than on local machines.

Avoid Assert Statements

Use conditional fail() calls instead to hide confusing information from students.

Not recommended:

@Test
public void testBorrowInBook() {
    Object book = newInstance("Book", 0, "Some title");
    invokeMethod(book, "borrow");
    assertFalse((Boolean) invokeMethod(book, "isAvailable"),
            "A borrowed book must be unavailable!");
}

Shows: org.opentest4j.AssertionFailedError: A borrowed book must be unavailable! ==> Expected <false> but was <true>

Recommended:

@Test
public void testBorrowInBook() {
    Object book = newInstance("Book", 0, "Some title");
    invokeMethod(book, "borrow");
    if ((Boolean) invokeMethod(book, "isAvailable")) {
        fail("A borrowed book is not available anymore!");
    }
}

Shows: org.opentest4j.AssertionFailedError: A borrowed book is not available anymore!

Write Tests Independent of Student Code

Students can break anything. Use reflective operations instead of direct code references.

Not recommended (causes build errors):

@Test
public void testBorrowInBook() {
    Book book = new Book(0, "Some title");
    book.borrow();
    if (book.isAvailable()) {
        fail("A borrowed book must be unavailable!");
    }
}

Recommended (provides meaningful errors):

@Test
public void testBorrowInBook() {
    Object book = newInstance("Book", 0, "Some title");
    invokeMethod(book, "borrow");
    if ((Boolean) invokeMethod(book, "isAvailable")) {
        fail("A borrowed book must be unavailable!");
    }
}

Error message: The class 'Book' was not found within the submission. Make sure to implement it properly.

Check for Hard-Coded Student Solutions

Students may hardcode values to pass specific tests. Verify solutions fulfill actual requirements, especially in exams.
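
One common countermeasure is to randomize test inputs so that hard-coded return values cannot pass every run. A minimal sketch follows; the Calculator class below only stands in for the student submission to keep the example self-contained (in a real exercise you would call the student code, e.g. reflectively as shown above):

import java.util.concurrent.ThreadLocalRandom;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.fail;

public class HardcodingResistantTest {

    // Stand-in for the student submission, only included to keep this sketch self-contained.
    static class Calculator {
        long square(int x) {
            return (long) x * x;
        }
    }

    @Test
    public void testSquareWithRandomInput() {
        // A fixed expected value cannot be hard-coded because the input changes on every run.
        int input = ThreadLocalRandom.current().nextInt(-1000, 1000);
        long expected = (long) input * input;
        long actual = new Calculator().square(input);
        if (expected != actual) {
            fail("square(" + input + ") returned " + actual + ", but " + expected + " was expected.");
        }
    }
}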

Avoid Relying on Specific Task Order

Tests should cover one aspect without requiring different parts to be implemented.

Example: Testing translate and runService methods where runService calls translate.

Not recommended (assumes translate is implemented):

@Test
public void testRunServiceInTranslationServer() {
    String result = translationServer.runService("French", "Dog");
    assertEquals("Dog:French", result);
}

Recommended (overrides translate to test runService independently):

@Test
public void testRunServiceInTranslationServer() {
    TranslationServer testServer = new TranslationServer() {
        @Override
        public String translate(String word, String language) {
            return word + ":" + language;
        }
    };
    String expected = "Dog:French";
    String actual = testServer.runService("French", "Dog");
    if (!expected.equals(actual)) {
        fail("Descriptive fail message");
    }
}
⚠️

Handle students making classes/methods final via problem statement or tests to avoid compilation errors.

Catch Possible Student Errors

Handle student mistakes appropriately. For example, null returns can cause NullPointerException.

@Test
public void testBorrowInBook() {
    Object book = newInstance("Book", 0, "Some title");
    Object result = invokeMethod(book, "getTitle");
    if (result == null) {
        fail("getTitle() returned null!");
    }
    // Continue with test
}

Java Best Practices

Use Constant String Attributes for Base Package

Avoid repeating long package identifiers:

private static final String BASE_PACKAGE = "de.tum.in.ase.pse.";

@Test
public void testBorrowInBook() {
    Object book = newInstance(BASE_PACKAGE + "Book", 0, "Some title");
    // Test Code
}
Use JUnit5 and Ares Features

More information: JUnit5 Documentation and Ares GitHub

Useful features:

Define Custom Annotations

Combine annotations for better readability:

@Test
@StrictTimeout(10)
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.METHOD})
public @interface LongTest {
}
Consider Using jqwik for Property-Based Testing

jqwik allows testing with arbitrary inputs and shrinks errors to excellent counter-examples (usually edge cases).
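
A minimal jqwik sketch (assuming the jqwik dependency is on the test classpath; the property below is only an illustration):

import net.jqwik.api.ForAll;
import net.jqwik.api.Property;

public class ReverseProperties {

    // jqwik generates many arbitrary strings and shrinks failing inputs to small counter-examples.
    @Property
    public boolean reversingTwiceYieldsTheOriginal(@ForAll String original) {
        String reversed = new StringBuilder(original).reverse().toString();
        return new StringBuilder(reversed).reverse().toString().equals(original);
    }
}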

Eclipse Compiler and Best-Effort Compilation

Use Eclipse Java Compiler for partial, best-effort compilation. Useful for exam exercises and complicated generics.

Compilation errors are transformed into errors thrown where code doesn't compile (method/class level). The body is replaced with throw new Error("Unresolved compilation problems: ...").

⚠️

Important: Only method/class bodies should fail to compile, not complete test classes. Anything outside method/nested class bodies must compile, including method signatures, return types, parameter types, and lambdas. Use nested classes for fields/methods with student class types that might not compile.

ℹ️

The Eclipse Compiler may not support the latest Java version. You can compile student code with the latest Java and test code with the previous version.

See old documentation for Maven configuration examples.

Common Pitfalls / Problems
  • Reflection API limitation: Constant attributes (static final primitives/Strings) are inlined at compile-time, making them impossible to change at runtime
  • Long output: Arrays or Strings with long output may be unreadable or truncated after 5000 characters

Submission Policy Configuration

Submission policies define the effect of a submission on participant progress. A programming exercise can have zero or one submission policy (never more than one). Policies are specified during exercise creation and can be adjusted in the grading configuration later.

ℹ️

Definition: One submission = one push to the exercise repository by the participant that triggers automatic tests resulting in feedback. Automatic test runs triggered by instructors are not counted as submissions.

Submission Policy Types

Choosing the right policy depends on the exercise and teaching style. Lock repository and submission penalty policies combat trial-and-error solving approaches.

1. None

No submission policy. Participants can submit as often as they want until the due date.

Submission Policy None
No Submission Policy

2. Lock Repository

Participants can submit a fixed number of times. After reaching the limit, the repository is locked and further submissions are prevented.

Submission Policy Lock Repository
Lock Repository Policy Configuration

With the example configuration shown above, participants can submit 5 times. After the 5th submission, Artemis locks the repository, preventing further pushes.

⚠️

If locking fails and the participant submits again, Artemis attempts to lock again and sets the new result to Not Graded.

ℹ️

The participant may still work on their solution locally, but cannot submit it to Artemis to receive feedback.

3. Submission Penalty

Participants can submit as often as they want. For each submission exceeding the limit, the penalty is deducted from the score.

Submission Policy Submission Penalty
Submission Penalty Policy Configuration

With the example configuration above:

  • First 3 submissions: no penalty
  • 4th submission: 1.5 points deducted
  • 5th submission: 3 points deducted (1.5 × 2 submissions exceeding limit)
  • Score cannot be negative

Example: Student achieves 6 out of 12 points on 4th submission. With 1.5 point penalty, final score is 4.5 out of 12.
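
In general, with an exceeding penalty of $p$ points, a submission limit of $k$, and $n$ submissions, the deduction applied to the latest result is (a summary of the behavior described above):

$$\text{deduction} = p \cdot \max(0,\ n - k), \qquad \text{final score} = \max(0,\ \text{achieved points} - \text{deduction})$$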

Students receive feedback explaining the deduction:

Submission Penalty Feedback
Submission Penalty Feedback Example
Submission Penalty Feedback 2
Submission Penalty Feedback Details

Updating Submission Policies

After generating an exercise, submission policies can be updated on the grading page.

Submission Policy Grading Page
Submission Policy Management on Grading Page

(De)activating Submission Policies

  • Active policy: Shows Deactivate button
  • Inactive policy: Shows Activate button

When deactivated, Artemis no longer enforces the policy. Locked repositories are unlocked. For submission penalty policies, press Re-evaluate All to apply changes.

Updating Submission Policies

Modify configuration and press Update. The effect of the former policy is removed and the new policy is applied. For submission penalty policies, press Re-evaluate All to update results.

Deleting Submission Policies

Select None as policy type and press Update. Locked repositories are unlocked. For submission penalty policies, press Re-evaluate All to revert effects.

Java DejaGnu: Blackbox Testing

Classical testing frameworks like JUnit allow writing whitebox tests, which enforce assumptions about code structure (class names, method names, signatures). This requires specifying all structural aspects for tests to run on student submissions. That may be okay or even desired for a beginner course.

For advanced courses, this is a downside: students cannot make their own structural decisions and gain experience in this important programming aspect.

DejaGnu enables blackbox tests for command line interfaces. Tests are written in Expect Script (extension of Tcl). Expect is a Unix utility for automatic interaction with programs exposing text terminal interfaces in a robust way.

Test scripts:

  • Start the program as a separate process (possibly several times)
  • Interact via textual inputs (standard input)
  • Read outputs and make assertions (exact or regex matching)
  • Decide next inputs based on output, simulating user interaction

For exercises, only specify:

  • Command line interface syntax
  • Rough output format guidance

Source code structure is up to students as far as you want.

The quality of the source code structure can still be assessed manually after the submission deadline. The template uses Maven to compile student code, so it can be extended with regular unit tests (e.g., architecture tests for cyclic package dependencies), and the results of both can be reported to the student.

Usage: Consult official documentation and initial test-repository content. DejaGnu files are in the testsuite directory. The ….tests directory contains three example test execution scripts.

Example: PROGRAM_TEST {add x} {} puts "add x" into the program and expects no output.

Helper functions like PROGRAM_TEST are defined in config/default.exp.

Variables in SCREAMING_SNAKE_CASE (e.g., MAIN_CLASS) are replaced with actual values in previous build plan steps. For example, the build plan finds the Java class with main method and replaces MAIN_CLASS.

Best Expect documentation: Exploring Expect book. The Artemis default template contains reusable helper functions in config/default.exp for common I/O use cases.

This exercise type makes it quite easy to reuse existing exercises from the Praktomat autograder system.

Sending Feedback back to Artemis

By default, unit test results are extracted and sent to Artemis automatically. Only custom setups may need semi-automatic approaches.

Jenkins

In Jenkins CI, test case feedback is extracted from XML files in JUnit format. The Jenkins plugin reads files from a results folder in the Jenkins workspace top level. Regular unit test files are copied automatically.

For custom test case feedback, create a customFeedbacks folder at workspace top level. In this folder, create JSON files for each test case feedback:

{
    "name": string,
    "successful": boolean,
    "message": string
}
  • name: Test case name as shown on 'Configure Grading' page. Must be non-null and non-empty
  • successful: Indicates test case success/failure. Defaults to false if not present
  • message: Additional information shown to student. Required for non-successful tests, optional otherwise
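
As an illustration, a custom check could produce such a file with plain Java (a hedged sketch; the class name, file name, and check below are hypothetical and not part of any Artemis template):

import java.nio.file.Files;
import java.nio.file.Path;

public class CustomFeedbackWriter {

    public static void main(String[] args) throws Exception {
        boolean successful = runCustomCheck();
        String message = successful ? "" : "The program output did not match the expected format.";
        String json = """
                {
                  "name": "CustomCheck: output format",
                  "successful": %s,
                  "message": "%s"
                }
                """.formatted(successful, message);

        // One JSON file per test case, placed in the customFeedbacks folder at the workspace top level.
        Path folder = Path.of("customFeedbacks");
        Files.createDirectories(folder);
        Files.writeString(folder.resolve("custom-check-output-format.json"), json);
    }

    private static boolean runCustomCheck() {
        // Placeholder for an arbitrary custom check that is not based on a unit test framework.
        return true;
    }
}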

Integrated Code Lifecycle

The Artemis Integrated Code Lifecycle allows using programming exercises fully integrated within Artemis, without external tools. Find more information in the Integrated Code Lifecycle documentation.