diff --git a/README.md b/README.md
index 1570a2e071ffb604c9dbf65df23e24eec11adac3..28fec4eb5e967485ac4bfc3ef0312c49fbcbd842 100644
--- a/README.md
+++ b/README.md
@@ -106,4 +106,43 @@ echo "Time elapsed: ${hour}  hour $min min " >>${pred_log}
 
 ## Submitting a job to the GPU
 
+See ```batch_pipeline.sh```:
 
+```
+#!/bin/bash
+#SBATCH --job-name=pipe1
+#SBATCH --gres=gpu:1
+#SBATCH --cpus-per-gpu=8
+#SBATCH --mem-per-cpu=4G
+
+# change into the repository and activate the virtual environment
+cd /path/to/repo
+source dbgpt-hub/bin/activate
+
+# run the full pipeline
+sh pipeline.sh
+```
+```--gres=gpu:1``` sets the number of GPUs requested.
+```/path/to/repo``` has to be adjusted to the location of the repository.
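+
+If a job needs more than one GPU, the count in ```--gres``` can be raised accordingly; an illustrative variant requesting two GPUs:
+```
+#SBATCH --gres=gpu:2
+```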
+
+Then submit the job to a GPU node with ```sbatch```:
+```
+sbatch --nodelist=workg01 batch_pipeline.sh
+```
+Currently available are ```workg01``` (40 GB per GPU) and ```workg02``` (80 GB per GPU), each with 4 GPUs.
+
+Running jobs can then be checked with ```squeue```.
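+
+For example, to list only your own jobs and, if necessary, cancel one (the job ID below is a placeholder):
+```
+squeue -u $USER   # show only your own jobs
+scancel 12345     # cancel a job by its ID (placeholder)
+```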
+
+## Evaluation
+
+```
+cd Spider
+python spider/evaluation.py --gold [gold file] --pred [predicted file] --etype [evaluation type] --db [database dir] --table [table file]
+
+arguments:
+  [gold file]        gold.sql file where each line is `a gold SQL \t db_id`
+  [predicted file]   predicted sql file where each line is a predicted SQL
+  [evaluation type]  "match" for exact set matching score, "exec" for execution score, and "all" for both
+  [database dir]     directory which contains sub-directories where each SQLite3 database is stored
+  [table file]       table.json file which includes foreign key info of each database
+```
+
+Example:
+```
+python spider/evaluation.py --gold dev_gold.sql --pred ../output/pred/pred_sqlcoder_llama3_2.sql --etype "match" --db database --table tables.json
+```
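+
+To keep the scores for later comparison, the output can be redirected into a file; a minimal sketch where the log file name is only an example:
+```
+python spider/evaluation.py --gold dev_gold.sql --pred ../output/pred/pred_sqlcoder_llama3_2.sql --etype "all" --db database --table tables.json > eval_sqlcoder_llama3_2.log
+```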