This tutorial walks you through running Shock locally and performing common operations using curl. By the end you will know how to create nodes, upload and download files, set metadata, query by attributes, and manage access control.
- Docker and Docker Compose
- `curl` (comes pre-installed on most systems)
- Optional: `jq` for pretty-printing JSON responses
From the repository root:
```sh
docker-compose up -d
```

This starts Shock (port 7445) and MongoDB. Verify the server is running:
```sh
curl http://localhost:7445/ | jq .
```

You should see a JSON response with the Shock version and server information.
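Most commands in this walkthrough pipe through `jq`. One pattern used repeatedly below is `jq -r` to pull a single field out of a response into a shell variable; here it is in isolation against a canned response (the id value is made up for this example):

```sh
# A canned response with the same shape Shock returns on node creation
# (the id is a made-up example value).
RESPONSE='{"status": 200, "data": {"id": "130cadb5-9435-4bd9-be13-715ec40b2bb5"}}'

# -r ("raw output") prints the string without surrounding quotes,
# so the value can be captured cleanly into a shell variable.
NODE_ID=$(echo "$RESPONSE" | jq -r .data.id)
echo "$NODE_ID"
```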
Create an empty node (metadata only, no file):
```sh
curl -X POST http://localhost:7445/node | jq .
```

The response contains a `data.id` field -- this is your node's UUID. Save it for later:
```sh
NODE_ID=<paste-your-uuid-here>
```

Upload a file by creating a new node with a file attached:
```sh
curl -X POST -F 'upload=@myfile.txt' http://localhost:7445/node | jq .
```

The response includes the file name, size, and MD5 checksum under `data.file`. Save the node ID:
```sh
NODE_ID=$(curl -s -X POST -F 'upload=@myfile.txt' http://localhost:7445/node | jq -r .data.id)
echo $NODE_ID
```

You can also upload a file to an existing node:
```sh
curl -X PUT -F 'upload=@myfile.txt' http://localhost:7445/node/$NODE_ID | jq .
```

Download the file stored in a node:
```sh
curl -OJ "http://localhost:7445/node/${NODE_ID}?download"
```

The `-OJ` flags tell curl to save the file using the server-provided filename.
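Since the node metadata records an MD5 checksum under `data.file`, you can verify a download locally. The sketch below is self-contained: `sample.txt` and its precomputed MD5 stand in for your downloaded file and the checksum the server reported:

```sh
# Create a stand-in "downloaded" file so this sketch is self-contained;
# in practice this would be the file fetched with curl -OJ above.
printf 'hello' > sample.txt

# The checksum the server would have reported for this exact content
# (precomputed md5 of the five bytes "hello").
EXPECTED_MD5="5d41402abc4b2a76b9719d911017c592"

# md5sum prints "<hash>  <filename>"; keep only the hash.
ACTUAL_MD5=$(md5sum sample.txt | awk '{print $1}')

[ "$ACTUAL_MD5" = "$EXPECTED_MD5" ] && echo "checksum OK"
```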
To just stream to stdout:
```sh
curl "http://localhost:7445/node/${NODE_ID}?download"
```

Attributes are free-form JSON metadata attached to a node. Add them with an inline string:
```sh
curl -X PUT -F 'attributes_str={"project":"my-experiment", "sample_nr": 1, "organism": "E. coli"}' \
  http://localhost:7445/node/$NODE_ID | jq .
```

Or from a JSON file:
```sh
echo '{"project":"my-experiment", "sample_nr": 1}' > attrs.json
curl -X PUT -F 'attributes=@attrs.json' http://localhost:7445/node/$NODE_ID | jq .
```

Search for nodes by attribute values:
```sh
# Find all nodes in a project
curl "http://localhost:7445/node?query&project=my-experiment" | jq .

# Limit results
curl "http://localhost:7445/node?query&project=my-experiment&limit=10" | jq .

# Paginate with offset
curl "http://localhost:7445/node?query&project=my-experiment&limit=10&offset=20" | jq .
```

Retrieve the full metadata for a node:
```sh
curl http://localhost:7445/node/$NODE_ID | jq .
```

Delete a node:

```sh
curl -X DELETE http://localhost:7445/node/$NODE_ID | jq .
```

When authentication is enabled, nodes have access control lists. View a node's ACLs:
```sh
curl http://localhost:7445/node/$NODE_ID/acl/ | jq .
```

Grant read access to a user:
```sh
curl -X PUT "http://localhost:7445/node/$NODE_ID/acl/read?users=username" | jq .
```

Remove read access:
```sh
curl -X DELETE "http://localhost:7445/node/$NODE_ID/acl/read?users=username" | jq .
```

ACL types: `read`, `write`, `delete`, `owner`.
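Granting several permission types to the same user is just a loop over ACL names. The sketch below only prints the URL for each grant so it runs without a server; swap the `echo` for `curl -X PUT "$URL" | jq .` to apply them. It assumes the write and delete ACLs use the same endpoint shape as the read example above, and the node id is made up:

```sh
NODE_ID="130cadb5-9435-4bd9-be13-715ec40b2bb5"   # example id

# Build (and here, just print) one grant URL per ACL type.
for ACL in read write delete; do
    URL="http://localhost:7445/node/${NODE_ID}/acl/${ACL}?users=username"
    echo "$URL"
done
```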
Shock can store files in S3-compatible object storage. For local development and testing, you can use MinIO as an S3 backend.
```sh
docker-compose -f docker-compose.minio.yml up -d shock-mongo shock-minio shock-minio-init shock-server
```

This starts:
- MinIO -- S3-compatible object store (API on port 9000, console on port 9001)
- MongoDB -- metadata storage
- Shock -- configured with auto-upload to MinIO
Upload a file:
```sh
NODE_ID=$(curl -s -X POST -F 'upload=@myfile.txt' http://localhost:7445/node | jq -r .data.id)
echo $NODE_ID
```

Check the node's storage locations (the auto-upload worker copies the file to MinIO in the background):
```sh
# Wait a moment for the auto-upload, then check locations
curl http://localhost:7445/node/$NODE_ID | jq .data.locations
```

You should see an entry for the S3 location once the upload completes.
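Because the copy to MinIO is asynchronous, a short poll loop is more reliable than re-running the check by hand. The sketch below stubs the readiness test with a counter so it runs anywhere; against a live server you would replace the body of `locations_present` with a real check, for example `curl -s http://localhost:7445/node/$NODE_ID | jq -e '.data.locations | length > 0' > /dev/null` (a reasonable jq filter, not a Shock-specific feature):

```sh
# Stub: pretend the S3 location appears on the third check.
# Replace this function body with a real curl+jq test in practice.
TRIES=0
locations_present() {
    TRIES=$((TRIES + 1))
    [ "$TRIES" -ge 3 ]
}

until locations_present; do
    echo "waiting for auto-upload..."
    sleep 1   # a few seconds between polls is realistic
done
echo "location recorded after $TRIES checks"
```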
Open the MinIO console at http://localhost:9001 (login: minioadmin / minioadmin) to see the uploaded files in the shock-data bucket.
To stop the stack and remove its volumes:

```sh
docker-compose -f docker-compose.minio.yml down -v
```

Further reading:

- Configuration Guide -- customize Shock for your environment
- API Reference -- full REST API documentation
- Caching and Data Migration -- set up multi-tier storage
- Data Types -- configure node types and priorities
- Use Cases -- real-world deployment examples