MinIO
Author: p | 2025-04-24
The rest of the Minio-Dep.yml Deployment manifest (its opening lines appear under step 4 further down this page) specifies the update strategy, Pod template, and MinIO container:

  # Specifies the strategy used to replace old Pods by new ones
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        # This label is used as a selector in the Service definition
        app: minio
    spec:
      # Volumes used by this deployment
      volumes:
        - name: data
          # This volume is based on the PVC
          persistentVolumeClaim:
            # Name of the PVC created earlier
            claimName: minio-pvc-claim
      containers:
        - name: minio
          # Volume mounts for this container
          volumeMounts:
            # Volume 'data' is mounted to path '/data'
            - name: data
              mountPath: /data
          # Pulls the latest MinIO image from Docker Hub
          image: minio/minio
          args:
            - server
            - /data
          env:
            # MinIO access key and secret key
            - name: MINIO_ACCESS_KEY
              value: "minio"
            - name: MINIO_SECRET_KEY
              value: "minio123"
          ports:
            - containerPort: 9000
          # The readiness probe detects situations when the MinIO server instance
          # is not ready to accept traffic. Kubernetes doesn't forward
          # traffic to the pod while readiness checks fail.
          readinessProbe:
            httpGet:
              path: /minio/health/ready
              port: 9000
            initialDelaySeconds: 120
            periodSeconds: 20
          # The liveness probe detects situations where the MinIO server instance
          # is not working properly and needs a restart. Kubernetes automatically
          # restarts the pod if liveness checks fail.
          livenessProbe:
            httpGet:
              path: /minio/health/live
              port: 9000
            initialDelaySeconds: 120
            periodSeconds: 20

Apply the configuration file:

kubectl create -f Minio-Dep.yml

Verify that the pod is running:

# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
minio-7b555749d4-cdj47   1/1     Running   0          22s

Furthermore, the PV should be bound at this moment:

# kubectl get pv
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                     STORAGECLASS       REASON   AGE
my-local-pv   4Gi        RWX            Retain           Bound    default/minio-pvc-claim   my-local-storage            4m42s

5. Deploy the MinIO Service
We will create a Service to expose port 9000. The Service can be deployed as NodePort, ClusterIP, or LoadBalancer.

Create the service file as below:

vim Minio-svc.yml

Add the lines below to the file:

apiVersion: v1
kind: Service
metadata:
  # This name uniquely identifies the service
  name: minio-service
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 9000
      targetPort: 9000
      protocol: TCP
  selector:
    # Looks for labels `app: minio` in the namespace and applies the spec
    app: minio

Apply the settings:

kubectl create -f Minio-svc.yml

Verify that the service is running:

# kubectl get svc
NAME            TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes      ClusterIP      10.96.0.1                      443/TCP          15m
minio-service   LoadBalancer   10.103.101.128                 9000:32278/TCP   27s

6. Access the MinIO Web UI
At this point, the MinIO service has been exposed on NodePort 32278. Proceed and access the web UI at http://<node-IP>:32278 (any cluster node's IP and the NodePort shown above), then use the MinIO access key and secret key set earlier to log in. On successful authentication, you should see the MinIO web console as below.

Create a bucket, say test.

Upload files to the created bucket.

The uploaded file will appear in the bucket as below.

You can also set the bucket policy.
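If the cluster has no external load balancer, the console can also be reached without the NodePort by forwarding the service to your workstation. This is a minimal sketch, assuming the minio-service name and port 9000 used above; kubectl port-forward is standard kubectl functionality:

```bash
# Forward local port 9000 to the MinIO service inside the cluster
kubectl port-forward svc/minio-service 9000:9000

# In another terminal (or browser), the web UI and S3 API are then reachable at
# http://localhost:9000 using the access key "minio" and secret key "minio123".
```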
7. Manage MinIO using the MC client
The MinIO Client (mc) is a tool used to manage the MinIO server by providing UNIX-like commands such as ls, rm, cat, mv, mirror, cp, etc. The MinIO Client is installed from the published binaries as below:

##For amd64
wget https://dl.min.io/client/mc/release/linux-amd64/mc

##For ppc64
wget https://dl.min.io/client/mc/release/linux-ppc64le/mc

Copy the file to your path and make it executable:

sudo cp mc /usr/local/bin/
sudo chmod +x /usr/local/bin/mc

Verify the installation:

$ mc --version
mc version RELEASE.2022-02-16T05-54-01Z

Once installed, connect to the MinIO server with the syntax:

mc alias set <ALIAS> <MINIO-SERVER-URL> [YOUR-ACCESS-KEY] [YOUR-SECRET-KEY] [--api API-SIGNATURE]

For this guide, the command will be:

mc alias set minio http://<node-IP>:32278 minio minio123 --api S3v4

Remember to specify the right port for the MinIO server. You can use the IP address of any node in the cluster.

Once connected, list all the buckets using the command:

mc ls play minio

You can list the files in a bucket, say test, with the command:

$ mc ls play minio/test
[2022-03-16 04:07:15 EDT]     0B 00000qweqwe/
[2022-03-16 05:31:53 EDT]     0B 000tut/
[2022-03-18 07:50:35 EDT]     0B 001267test/
[2022-03-16 21:03:34 EDT]     0B 3f66b017508b449781b927e876bbf640/
[2022-03-16 03:20:13 EDT]     0B 6210d9e5011632646d9b2abb/
[2022-03-16 07:05:02 EDT]     0B 622f997eb0a7c5ce72f6d199/
[2022-03-17 08:46:05 EDT]     0B 85x8nbntobfws58ue03fam8o5cowbfd3/
[2022-03-16 14:59:37 EDT]     0B 8b437f27dbac021c07d9af47b0b58290/
[2022-03-17 21:29:33 EDT]     0B abc/
.....
[2022-03-16 11:55:55 EDT]     0B zips/
[2022-03-17 11:05:01 EDT]     0B zips202203/
[2022-03-18 09:18:36 EDT] 262KiB STANDARD Install cPanel|WHM on AlmaLinux with Let's Encrypt 7.png

Create a new bucket using the syntax:

mc mb minio/<BUCKET-NAME>

For example, creating a bucket with the name testbucket1:

$ mc mb minio/testbucket1
Bucket created successfully `minio/testbucket1`.

The bucket will be available in the console.

In case you need help when using the MinIO client, get help using the command:

$ mc --help
NAME:
  mc - MinIO Client for cloud storage and filesystems.

USAGE:
  mc [FLAGS] COMMAND [COMMAND FLAGS | -h] [ARGUMENTS...]

COMMANDS:
  alias    manage server credentials in configuration file
  ls       list buckets and objects
  mb       make a bucket
  rb       remove a bucket
  cp       copy objects
  mv       move objects
  rm       remove objects
  ...
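Beyond creating buckets, day-to-day object management uses the same alias. The following sketch uses the minio alias and testbucket1 bucket from above; the local file and directory names are placeholders:

```bash
# Upload a single file to the bucket
mc cp ./backup.tar.gz minio/testbucket1/

# Mirror a local directory into the bucket (one-way sync)
mc mirror ./reports minio/testbucket1/reports

# List the uploaded objects
mc ls minio/testbucket1

# Download an object back to the local machine
mc cp minio/testbucket1/backup.tar.gz ./restore/backup.tar.gz

# Remove an object
mc rm minio/testbucket1/backup.tar.gz
```

Depending on the mc release, anonymous bucket access is managed with the mc policy command (older releases such as the one installed above) or mc anonymous (newer releases).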
MinIO JavaScript Library for Amazon S3 Compatible Cloud Storage
The MinIO JavaScript Client SDK provides high-level APIs to access any Amazon S3 compatible object storage server. This guide will show you how to install the client SDK and execute an example JavaScript program. For a complete list of APIs and examples, please take a look at the JavaScript Client API Reference documentation. This document presumes you have a working Node.js development environment (LTS versions v16, v18, or v20).

Download from NPM:

npm install --save minio

Download from source:

git clone https://github.com/minio/minio-js
npm install
npm run build
npm install -g

Using with TypeScript: minio >7.1.0 ships with a built-in type definition, so @types/minio is no longer needed.

Initialize MinIO Client
The following parameters are needed to connect to a MinIO object storage server:

endPoint – Hostname of the object storage service.
port – TCP/IP port number. Optional, defaults to 80 for HTTP and 443 for HTTPS.
accessKey – Access key (user ID) of an account in the S3 service.
secretKey – Secret key (password) of an account in the S3 service.
useSSL – Optional, set to true to enable secure (HTTPS) access.

import * as Minio from 'minio'

const minioClient = new Minio.Client({
  endPoint: 'play.min.io',
  port: 9000,
  useSSL: true,
  accessKey: 'Q3AM3UQ867SPQQA43P2F',
  secretKey: 'zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG',
})

Quick Start Example - File Uploader
This example connects to an object storage server, creates a bucket, and uploads a file to the bucket. It uses the MinIO play server, a public MinIO cluster located at https://play.min.io. The play server runs the latest stable version of MinIO and may be used for testing and development. The access credentials shown in this example are open to the public.
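The uploader code itself did not survive in this copy, so what follows is a minimal sketch of such a program rather than the exact example from the SDK documentation. It reuses the play-server client shown above; the bucket name, region, and file path are placeholders:

```javascript
import * as Minio from 'minio'

const minioClient = new Minio.Client({
  endPoint: 'play.min.io',
  port: 9000,
  useSSL: true,
  accessKey: 'Q3AM3UQ867SPQQA43P2F',
  secretKey: 'zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG',
})

const bucket = 'js-test-bucket'   // placeholder bucket name
const sourceFile = './photo.jpg'  // placeholder local file
const objectName = 'photo.jpg'

// Create the bucket if it does not exist, then upload the file.
async function upload() {
  const exists = await minioClient.bucketExists(bucket)
  if (!exists) {
    await minioClient.makeBucket(bucket, 'us-east-1')
    console.log(`Bucket ${bucket} created.`)
  }
  await minioClient.fPutObject(bucket, objectName, sourceFile)
  console.log(`Uploaded ${sourceFile} as ${objectName} to ${bucket}.`)
}

upload().catch((err) => console.error(err))
```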
How to use Cloud Explorer with MinIO
In this recipe you will learn how to carry out basic operations on MinIO using Cloud Explorer. Cloud Explorer is an open-source S3 client. It works on Windows, Linux, and Mac, and has a graphical and a command-line interface for each supported operating system. If you have a feature suggestion or find a bug, please open an issue.

Features:
- Search
- Performance testing
- Migrate buckets between S3 accounts
- Simple text editor
- Sync folders
- Create snapshots of buckets

Prerequisites:
- Cloud Explorer is installed and running.
- MinIO Server is running on localhost on port 9000 over HTTP; follow the MinIO quickstart guide to install MinIO.

Steps:
- Add your MinIO account to Cloud Explorer and click save.
- Click on the MinIO account and then the "Load" button to connect. In the future, clicking on a saved S3 account will automatically load the account and show the buckets.
- Create a bucket.
- Upload a file to a bucket.
- Click on the magnifying glass and then click "Refresh Bucket" to view the uploaded file.

Explore further: MinIO Client complete guide, Cloud Explorer homepage, Linux-toys.com

For this guide, I have configured 3 worker nodes and a single control plane in my cluster:

# kubectl get nodes
NAME      STATUS   ROLES           AGE   VERSION
master1   Ready    control-plane   3m    v1.26.5
node1     Ready                    60s   v1.26.5
node2     Ready                    60s   v1.26.5
node3     Ready                    60s   v1.26.5

1. Create a StorageClass with WaitForFirstConsumer
The WaitForFirstConsumer volumeBindingMode delays binding of a persistent volume until a Pod that uses it is scheduled. Create the storage class as below:

vim storageClass.yml

In the file, add the lines below:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: my-local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

Create the StorageClass:

$ kubectl create -f storageClass.yml
storageclass.storage.k8s.io/my-local-storage created

2. Create a Local Persistent Volume
For this guide, we will create a persistent volume on one of the local machines (nodes) using the storage class above. The persistent volume will be created as below:

vim minio-pv.yml

Add the lines below to the file:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-local-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: my-local-storage
  local:
    path: /mnt/disk/vol1
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node1

Here I have created a persistent volume on node1. Go to node1 and create the backing directory as below:

DIRNAME="vol1"
sudo mkdir -p /mnt/disk/$DIRNAME
sudo chcon -Rt svirt_sandbox_file_t /mnt/disk/$DIRNAME
sudo chmod 777 /mnt/disk/$DIRNAME

Now, on the master node, create the PersistentVolume:

kubectl create -f minio-pv.yml
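The nodeAffinity block above pins the volume to the node whose kubernetes.io/hostname label is node1. It can be worth confirming that label value before applying the manifest; a quick check with standard kubectl, shown here as a sketch:

```bash
# Show the hostname label of every node; the value for node1 must match
# the entry under nodeAffinity -> values in minio-pv.yml
kubectl get nodes -L kubernetes.io/hostname
```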
3. Create a Persistent Volume Claim
Now we will create a persistent volume claim that references the storage class created above:

vim minio-pvc.yml

The file will contain the information below:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # This name uniquely identifies the PVC. This is used in the deployment.
  name: minio-pvc-claim
spec:
  # Read more about access modes in the Kubernetes documentation
  storageClassName: my-local-storage
  accessModes:
    # The volume is mounted as read-write by multiple nodes
    - ReadWriteMany
  resources:
    # This is the request for storage. It should be available in the cluster.
    requests:
      storage: 10Gi

Create the PVC:

kubectl create -f minio-pvc.yml

At this point, the PV should be available as below:

# kubectl get pv
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS       REASON   AGE
my-local-pv   4Gi        RWX            Retain           Available           my-local-storage            96s

4. Create the MinIO Pod
This is the main deployment; we will use the MinIO image and the PVC created above. Create the file as below:

vim Minio-Dep.yml

The file will have the content below:

apiVersion: apps/v1
kind: Deployment
metadata:
  # This name uniquely identifies the Deployment
  name: minio
spec:
  selector:
    matchLabels:
      app: minio # has to match .spec.template.metadata.labels
  strategy:

The manifest continues with the update strategy, Pod template, and container definition shown at the top of this page.
--http_proxy.key_file value
    Path to a key used to authenticate with the proxy backend using mTLS. If this flag is provided, then http_proxy.cert_file must also be specified. [$BAZEL_REMOTE_HTTP_PROXY_KEY_FILE]
--http_proxy.cert_file value
    Path to a certificate used to authenticate with the proxy backend using mTLS. If this flag is provided, then http_proxy.key_file must also be specified. [$BAZEL_REMOTE_HTTP_PROXY_CERT_FILE]
--http_proxy.ca_file value
    Path to a certificate authority used to validate the http proxy backend certificate. [$BAZEL_REMOTE_HTTP_PROXY_CA_FILE]
--gcs_proxy.bucket value
    The bucket to use for the Google Cloud Storage proxy backend. [$BAZEL_REMOTE_GCS_BUCKET]
--gcs_proxy.use_default_credentials
    Whether or not to use authentication for the Google Cloud Storage proxy backend. (default: false) [$BAZEL_REMOTE_GCS_USE_DEFAULT_CREDENTIALS]
--gcs_proxy.json_credentials_file value
    Path to a JSON file that contains Google credentials for the Google Cloud Storage proxy backend. [$BAZEL_REMOTE_GCS_JSON_CREDENTIALS_FILE]
--ldap.url value
    The LDAP URL, which may include a port. LDAP over SSL (LDAPS) is also supported. Note that this feature is currently considered experimental. [$BAZEL_REMOTE_LDAP_URL]
--ldap.base_dn value
    The distinguished name of the search base. [$BAZEL_REMOTE_LDAP_BASE_DN]
--ldap.bind_user value
    The user who is allowed to perform a search within the base DN. If none is specified, the connection and the search are performed without authentication. It is recommended to use a read-only account. [$BAZEL_REMOTE_LDAP_BIND_USER]
--ldap.bind_password value
    The password of the bind user. [$BAZEL_REMOTE_LDAP_BIND_PASSWORD]
--ldap.username_attribute value
    The user attribute of a connecting user. (default: "uid") [$BAZEL_REMOTE_LDAP_USER_ATTRIBUTE]
--ldap.groups_query value
    Filter clause for searching groups. [$BAZEL_REMOTE_LDAP_GROUPS_QUERY]
--ldap.cache_time value
    The amount of time to cache a successful authentication, in seconds. (default: 3600) [$BAZEL_REMOTE_LDAP_CACHE_TIME]
--s3.endpoint value
    The S3/minio endpoint to use when using the S3 proxy backend. [$BAZEL_REMOTE_S3_ENDPOINT]
--s3.bucket value
    The S3/minio bucket to use when using the S3 proxy backend. [$BAZEL_REMOTE_S3_BUCKET]
--s3.bucket_lookup_type value
    The S3/minio bucket lookup type to use when using the S3 proxy backend. Allowed values: auto, dns, path. (default: "auto") [$BAZEL_REMOTE_S3_BUCKET_LOOKUP_TYPE]
--s3.prefix value
    The S3/minio object prefix to use when using the S3 proxy backend. [$BAZEL_REMOTE_S3_PREFIX]
--s3.auth_method value
    The S3/minio authentication method. This argument is required when an S3 proxy backend is used. Allowed values: iam_role, access_key, aws_credentials_file. [$BAZEL_REMOTE_S3_AUTH_METHOD]
--s3.access_key_id value
    The S3/minio access key to use when using the S3 proxy backend. Applies to s3 auth method(s): access_key. [$BAZEL_REMOTE_S3_ACCESS_KEY_ID]
--s3.secret_access_key value
    The S3/minio secret access key to use when using the S3 proxy backend. Applies to s3 auth method(s): access_key.
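As an illustration of how these S3 flags fit together, the following hypothetical invocation points a bazel-remote cache at the MinIO service deployed earlier on this page. The --dir and --max_size flags, the endpoint, the bucket name, and the credentials are assumptions rather than values from the original text, and a plain-HTTP endpoint may additionally require the TLS-related flags not shown in this excerpt:

```bash
bazel-remote \
  --dir /var/cache/bazel-remote \
  --max_size 50 \
  --s3.endpoint <node-IP>:32278 \
  --s3.bucket bazel-cache \
  --s3.bucket_lookup_type path \
  --s3.auth_method access_key \
  --s3.access_key_id minio \
  --s3.secret_access_key minio123
```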
Migrating MinIO Data from CORE to SCALE
Make sure the S3 MinIO data is backed up as a precaution. Migrating off the S3 service requires first migrating to the MinIO plugin in TrueNAS CORE, then migrating from CORE to SCALE, and finally installing the SCALE MinIO app and importing the S3 data.

- Back up any critical data.
- Download your system configuration file and a debug file. After updating to the latest publicly available release of CORE and making any changes to CORE user accounts or any other settings, download these files and keep them in a safe place where you can access them if you need to revert to CORE with a clean install using the CORE ISO file.
- After completing the steps that apply to your CORE system listed above, download the SCALE ISO file and save it to your computer.
- Burn the ISO to a USB drive (see Installing on Physical Hardware in Installing SCALE) when upgrading a physical system.
MinIO is a high-performance, S3-compatible distributed object storage system. It is the only 100% open-source storage tool available on every public and private cloud, every Kubernetes distribution, and the edge. The MinIO storage system is able to run on minimal CPU and memory resources while still giving maximum performance.

MinIO is dominant in traditional object storage use cases such as secondary storage, archiving, and data recovery. One of the main features that makes it suitable for these uses is its ability to overcome challenges associated with machine learning, cloud-native application workloads, and analytics.

Other notable features of MinIO are:

- Identity management – it supports the most advanced standards in identity management, with the ability to integrate with OpenID Connect compatible providers as well as key external IDP vendors.
- Monitoring – it offers detailed performance analysis with metrics and per-operation logging.
- Encryption – it supports multiple, sophisticated server-side encryption schemes to protect data, ensuring integrity, confidentiality, and authenticity with negligible performance overhead.
- High performance – it is the fastest object storage, with GET/PUT throughput of 325 and 165 GiB/sec respectively on just 32 nodes of NVMe.
- Architecture – MinIO is cloud native and lightweight, and can also run as containers managed by external orchestration services such as Kubernetes. It runs efficiently on low CPU and memory resources, allowing one to co-host a large number of tenants on shared hardware.
- Data life-cycle management and tiering – this protects data within and across both public and private clouds.
- Continuous replication – it is designed for large-scale, cross-data-center deployments, curbing the challenge of traditional replication approaches that do not scale effectively beyond a few hundred TiB.

By following this guide, you should be able to deploy and manage MinIO storage clusters on Kubernetes. This guide requires a Kubernetes cluster to be set up. Below are dedicated guides to help you set up a Kubernetes cluster:

- Install Kubernetes Cluster on Ubuntu with kubeadm
- Deploy Kubernetes Cluster on Linux With k0s
- Install Kubernetes Cluster on Rocky Linux 8 with Kubeadm & CRI-O
- Run Kubernetes on Debian with Minikube
- Install Kubernetes Cluster on Ubuntu using K3s
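Once any MinIO deployment described on this page is reachable, its unauthenticated health endpoints (the same paths used by the readiness and liveness probes in the Kubernetes deployment above) give a quick way to confirm the server is up. A minimal sketch, assuming the NodePort service from the guide above; the host and port are placeholders:

```bash
# Liveness: the server process is running
curl -I http://<node-IP>:32278/minio/health/live

# Readiness: the server is ready to accept traffic
curl -I http://<node-IP>:32278/minio/health/ready

# A healthy instance answers both requests with HTTP 200
```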