# S3-Compatible Access
Oceanum Storage provides an S3-compatible API, allowing you to use standard AWS S3 tools and libraries to interact with your storage.
## Endpoint Configuration

| Setting | Value |
|---|---|
| Endpoint URL | https://storage.oceanum.io |
| Region | auto |
## Authentication

You'll need your Oceanum.io access credentials to authenticate:
- Log in to Oceanum.io Platform
- Navigate to your account settings
- Generate or retrieve your access key and secret key
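Rather than hard-coding these keys into scripts, you can read them from environment variables. A minimal sketch, assuming the hypothetical variable names `OCEANUM_ACCESS_KEY` and `OCEANUM_SECRET_KEY` (use whatever names suit your setup):

```python
import os

# Hypothetical environment variable names - adjust to your own conventions.
access_key = os.environ.get("OCEANUM_ACCESS_KEY", "YOUR_ACCESS_KEY")
secret_key = os.environ.get("OCEANUM_SECRET_KEY", "YOUR_SECRET_KEY")

# Settings shared by any S3 client pointed at Oceanum Storage.
client_kwargs = {
    "endpoint_url": "https://storage.oceanum.io",
    "aws_access_key_id": access_key,
    "aws_secret_access_key": secret_key,
    "region_name": "auto",
}
```

These keyword arguments can be passed straight to `boto3.client("s3", **client_kwargs)`, as shown in the boto3 section of this page.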
## Using the AWS CLI

### Installation
Section titled “Installation”Install the AWS CLI if you haven’t already:
```shell
pip install awscli
```

### Configuration

Configure a profile for Oceanum Storage:
```shell
aws configure --profile oceanum
```

Enter your credentials when prompted:
- AWS Access Key ID: Your Oceanum access key
- AWS Secret Access Key: Your Oceanum secret key
- Default region name: `auto`
- Default output format: `json`
### Basic Commands

List buckets:
```shell
aws s3 --endpoint-url=https://storage.oceanum.io ls
```

List contents of a bucket:

```shell
aws s3 --endpoint-url=https://storage.oceanum.io ls s3://my-org-bucket/
```

Upload a file:

```shell
aws s3 --endpoint-url=https://storage.oceanum.io cp myfile.nc s3://my-org-bucket/data/
```

Download a file:

```shell
aws s3 --endpoint-url=https://storage.oceanum.io cp s3://my-org-bucket/data/myfile.nc ./
```

Sync a directory:

```shell
aws s3 --endpoint-url=https://storage.oceanum.io sync ./local-datadir s3://my-org-bucket/sensors
```

:::tip[Example]
Here's a real-world example syncing sensor data to an organization bucket:

```shell
aws s3 --endpoint-url=https://storage.oceanum.io sync ./root_datadir s3://oceanum-eda-org-port-taranaki/sensors
```
:::
## Using boto3 (Python)

```python
import boto3

# Create client
s3 = boto3.client(
    's3',
    endpoint_url='https://storage.oceanum.io',
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY',
)

# List buckets
response = s3.list_buckets()
for bucket in response['Buckets']:
    print(bucket['Name'])

# Upload a file
s3.upload_file('local_file.nc', 'my-bucket', 'data/remote_file.nc')

# Download a file
s3.download_file('my-bucket', 'data/remote_file.nc', 'downloaded_file.nc')

# List objects in a bucket
response = s3.list_objects_v2(Bucket='my-bucket', Prefix='data/')
for obj in response.get('Contents', []):
    print(obj['Key'])
```
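Note that `list_objects_v2` returns at most 1000 keys per call; for larger listings, boto3's paginator handles the continuation tokens for you. A small sketch of iterating over paginated results (the `iter_keys` helper is illustrative, not part of boto3):

```python
def iter_keys(pages):
    """Yield object keys from a sequence of list_objects_v2-style pages."""
    for page in pages:
        for obj in page.get("Contents", []):
            yield obj["Key"]

# With the boto3 client from the section above:
# paginator = s3.get_paginator("list_objects_v2")
# pages = paginator.paginate(Bucket="my-bucket", Prefix="data/")
# for key in iter_keys(pages):
#     print(key)
```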
## Using s3cmd

### Installation

```shell
pip install s3cmd
```

### Configuration
Create `~/.s3cfg`:

```ini
[default]
access_key = YOUR_ACCESS_KEY
secret_key = YOUR_SECRET_KEY
host_base = storage.oceanum.io
host_bucket = %(bucket)s.storage.oceanum.io
use_https = True
```

### Commands
```shell
# List buckets
s3cmd ls

# Upload file
s3cmd put myfile.nc s3://my-bucket/data/

# Download file
s3cmd get s3://my-bucket/data/myfile.nc
```

## Supported S3 Operations

Oceanum Storage supports the following S3 operations:
| Operation | Supported |
|---|---|
| ListBuckets | Yes |
| CreateBucket | Yes |
| DeleteBucket | Yes |
| ListObjects | Yes |
| GetObject | Yes |
| PutObject | Yes |
| DeleteObject | Yes |
| CopyObject | Yes |
| HeadObject | Yes |
| Multipart Upload | Yes |
| Presigned URLs | Yes |
## Multipart Uploads

For large files (>100 MB), multipart uploads are recommended for reliability. Most S3 tools handle this automatically, but you can configure the threshold:
```shell
# AWS CLI - set multipart threshold to 100MB
aws configure set s3.multipart_threshold 100MB --profile oceanum
```
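In boto3, the equivalent knobs live on `TransferConfig`, which `upload_file` accepts via its `Config` argument. A sketch of matching the 100 MB threshold (the chunk size shown is an illustrative choice, not a required value):

```python
MB = 1024 * 1024
multipart_threshold = 100 * MB   # switch to multipart above 100 MB
multipart_chunksize = 16 * MB    # size of each uploaded part

# from boto3.s3.transfer import TransferConfig
# config = TransferConfig(
#     multipart_threshold=multipart_threshold,
#     multipart_chunksize=multipart_chunksize,
# )
# s3.upload_file('big_file.nc', 'my-bucket', 'data/big_file.nc', Config=config)
```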
## Presigned URLs

Generate temporary URLs to share files without exposing credentials:
Using the boto3 client created above:

```python
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-bucket', 'Key': 'data/file.nc'},
    ExpiresIn=3600,  # URL valid for 1 hour
)
print(url)
```
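Presigned URLs can also authorize uploads. A sketch of generating a `put_object` URL with the same client, assuming the `s3` client and a `data/incoming.nc` key chosen for illustration:

```python
ONE_HOUR = 3600  # ExpiresIn is given in seconds

# upload_url = s3.generate_presigned_url(
#     'put_object',
#     Params={'Bucket': 'my-bucket', 'Key': 'data/incoming.nc'},
#     ExpiresIn=ONE_HOUR,
# )
# The recipient can then upload without holding any credentials, e.g.:
#   curl -X PUT --upload-file myfile.nc "$upload_url"
```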