
Exploring Amazon S3: The Cornerstone of Storage Solutions (Amazon S3 Usage Notes)

Published: 2024-07-31 15:04:55


This article is a follow-up to an earlier article on MinIO usage.

RELATED:

1. /ComfortableM/p/18286363
2. /zizai_a/article/details/140796186?spm=1001.2014.3001.5501

Introduction

Cloud storage has become an integral part of modern information technology, offering many conveniences to businesses and individuals alike. Here are a few key reasons why cloud storage matters:

1. Data security and backup

  • Data encryption: Cloud storage providers typically offer strong encryption to keep data secure both in transit and at rest.
  • Backup and recovery: Cloud storage backs up data automatically and can restore it quickly after a catastrophic event, so business continuity is not compromised.

2. Cost-effectiveness

  • Pay-as-you-go: Users pay only for the storage they actually use, avoiding the waste of pre-purchasing large amounts of capacity as with traditional storage.
  • Reduced O&M costs: Cloud storage cuts an organization's spending on hardware purchases, maintenance, and upgrades, as well as on power and cooling.

3. Flexibility and scalability

  • Unlimited expansion: As data volumes grow, cloud storage makes it easy to expand capacity without users manually adding hardware resources.
  • Multi-tenant model: Users can easily keep data segregated across projects or departments without physical constraints.

4. Access and collaboration

  • Remote access: Users can reach data stored in the cloud from anywhere with an Internet connection.
  • File sharing: A simple link-sharing mechanism makes it easy to share files among team members and facilitate collaboration.

5. Disaster recovery

  • Multi-geographic replication: Cloud storage services typically offer multi-geographic replication of data, ensuring that data remains available even if one data center fails.
  • Rapid Recovery: When something unexpected happens, cloud storage can recover data quickly, reducing the risk of data loss.

6. Technological innovation and support

  • Latest Technology: Cloud storage providers continually update their technology stacks to ensure that users have access to the latest storage technologies and security measures.
  • Technical Support: A professional technical support team can respond to users' questions and technical problems in a timely manner.

7. Regulatory compliance

  • Compliance: Many cloud storage providers follow strict regulatory standards to ensure data is stored in accordance with regional laws and regulations.

8. Contribution to smart cities

  • City management: Cloud storage is critical for collecting, processing, and analyzing the large volumes of data generated in smart cities, helping improve the efficiency of city management and the quality of services.

To summarize, cloud storage has not only changed the way businesses and individuals manage their data, but it has also pushed society as a whole in a more efficient and sustainable direction. As technology advances, we can expect cloud storage to play an even more important role in the future.

Amazon Simple Storage Service (S3) is one of the most mature and widely used object storage services in Amazon Web Services (AWS). Since its launch in 2006, S3 has become an icon in cloud computing, not only laying the foundation for AWS, but also becoming one of the standards for cloud storage worldwide.

Key Points:

  • Maturity and reliability: After years of operation, S3 has proven extremely reliable and stable. It handles massive data storage requirements while maintaining very high availability and durability.
  • Widespread adoption: S3 is used by countless businesses, including startups, large corporations, and government agencies, which rely on it to store everything from small files to petabyte-scale data sets.
  • Rich feature set: S3 offers a range of powerful features such as version control, lifecycle management, data encryption, access control, etc. These features make S3 a flexible and comprehensive storage solution.
  • Integration & Compatibility: S3 is tightly integrated with other AWS services, such as Amazon Elastic Compute Cloud (EC2), Amazon Redshift, and AWS Lambda, enabling users to build complex applications and services inside the AWS ecosystem. In addition, S3 supports a wide range of third-party tools and services, making it a core component of the data processing pipeline.
  • Cost-effectiveness: S3 offers a choice of storage classes that let users optimize costs based on access frequency and storage needs. For example, the Standard class suits frequently accessed data, while the Infrequent Access (IA) and Glacier classes suit long-term archived data.
  • Technological innovation: S3 continues to introduce new features and enhancements to meet changing market needs. For example, the Intelligent-Tiering storage class saves money by automatically moving data to the most appropriate storage tier.
  • Industry Recognition: S3 has received multiple industry awards and certifications that reflect its leadership in cloud storage.

Amazon S3, one of AWS' flagship services, has become the global benchmark for cloud storage over the past decade or so. Whether it's maturity, reliability, or feature richness, S3 is the trusted choice for organizations and developers. As AWS continues to innovate and build on its foundation, there is every reason to believe that S3 will continue to lead the way in cloud storage.

Introduction to Amazon S3

Amazon Simple Storage Service (S3) is an object storage service provided by Amazon Web Services (AWS). Since its launch in 2006, S3 has become an icon in the cloud storage space, providing developers and organizations with a simple, efficient way to store and retrieve any amount of data.

Core features:

  • High availability and durability: S3 is designed to withstand severe system failures and to keep data highly durable. Its design target is 99.999999999% (11 nines) of annual data durability, meaning data is virtually never lost. S3 stores multiple redundant copies to protect against component failures, and data is automatically distributed across multiple facilities so that a regional disaster does not affect availability.
  • Unlimited scalability: S3's architecture allows for seamless scaling without the need to pre-plan storage capacity. Organizations can store data from GB to EB levels as needed without worrying about storage limitations. This auto-scalability means organizations don't have to worry about manually adjusting their infrastructure when data volumes grow rapidly.
  • Cost-effectiveness: S3 offers a variety of storage classes so organizations can optimize costs by choosing the most appropriate option for how often data is accessed. For example, S3 Standard suits frequently accessed data, while S3 Standard-Infrequent Access (S3 Standard-IA) and S3 One Zone-Infrequent Access (S3 One Zone-IA) suit infrequently accessed data. S3 Intelligent-Tiering automatically detects access patterns and moves data to the most economical tier to further reduce costs.
  • Security and Compliance: S3 supports multiple security features such as server-side encryption, client-side encryption, and access control policies to protect data from unauthorized access. It also supports multiple compliance standards such as HIPAA, FedRAMP, PCI DSS, etc. to help organizations comply with industry regulatory requirements.
  • Easy to manage and integrate: S3 provides an intuitive management console that makes it easy for users to upload, download and manage data. It also provides a rich set of APIs and SDKs so developers can easily integrate S3 into their applications and services.
  • Data Lifecycle Management: With S3 Lifecycle Policies, organizations can automatically migrate data from one storage class to another or delete data after a specified period of time. This automated process helps reduce the administrative burden and ensures that data is always in the most economical storage tier.
  • High performance and global coverage: S3's global edge location network ensures low-latency data access, regardless of where in the world the user is located. This distributed architecture helps improve data availability and responsiveness.
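
To make lifecycle management concrete, here is a minimal lifecycle configuration in the JSON form accepted by the S3 console and CLI. The rule ID, prefix, and day counts below are illustrative assumptions, not values from this article:

```json
{
  "Rules": [
    {
      "ID": "archive-then-expire-logs",
      "Filter": { "Prefix": "logs/" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
```

A rule like this would move objects under logs/ to Standard-IA after 30 days, to Glacier after 90 days, and delete them after a year.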

Usage Scenarios:

  • Web Hosting: S3 can be used to host static websites, including HTML pages, CSS stylesheets, JavaScript scripts and images.
  • Data backup and recovery: Organizations can use S3 as part of their data backup and disaster recovery strategy to ensure the security and recoverability of their data.
  • Big data processing: S3 can serve as a data lake for big data analysis, storing raw data for subsequent processing and analysis.
  • Media storage and distribution: S3 is suitable for storing and distributing video, audio and other multimedia files.
  • Application Data Storage: S3 can store application-generated log files, cached data, database backups, and more.

Quick access

  • Register an account: /signup?request_type=register

    S3 offers a free tier that lets you build a small test environment before development.

    After registering and logging into the console, click your account name in the top-right corner and choose Security credentials:

    Scroll down to Access keys and create an access key; these are the credentials used to connect to S3. You can also create an IAM account, assign it permissions, log into the console with it, and create the access key there, which is the recommended practice for S3.

    Next, you can find S3 via the Services menu in the top left and start experimenting.

Spring Boot integration

Adding Dependencies

        <dependency>
            <groupId>software.amazon.awssdk</groupId>
            <artifactId>s3</artifactId>
        </dependency>

The version is usually managed through the AWS SDK for Java 2.x BOM, so no explicit version is needed here.

Creating the configuration class

import java.net.URI;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.S3Configuration;
import software.amazon.awssdk.services.s3.presigner.S3Presigner;

@Configuration
public class AmazonS3Config {

    // Property names are examples; use whatever keys you define in your configuration
    @Value("${aws.accessKeyId}")
    private String accessKeyId;
    @Value("${aws.secretKey}")
    private String secretKey;
    @Value("${aws.endpoint}")
    private String endPoint; // custom endpoint; comment out if not configured

    @Bean
    public S3Client s3Client() {
        AwsBasicCredentials credentials = AwsBasicCredentials.create(accessKeyId, secretKey);

        return S3Client.builder()
                .credentialsProvider(StaticCredentialsProvider.create(credentials))
                .region(Region.US_EAST_1) // region
                .endpointOverride(URI.create(endPoint)) // custom endpoint
                .serviceConfiguration(S3Configuration.builder()
                        .pathStyleAccessEnabled(true)
                        .chunkedEncodingEnabled(false)
                        .build())
                .build();
    }

    // S3Presigner is used to generate pre-signed URLs for objects
    @Bean
    public S3Presigner s3Presigner() {
        AwsBasicCredentials credentials = AwsBasicCredentials.create(accessKeyId, secretKey);

        return S3Presigner.builder()
                .credentialsProvider(StaticCredentialsProvider.create(credentials))
                .region(Region.US_EAST_1)
                .endpointOverride(URI.create(endPoint))
                .build();
    }
}
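
The configuration class reads its credentials from the Spring environment. Assuming property keys like aws.accessKeyId (the names are yours to choose; they just have to match the @Value annotations), a matching application.properties might look like:

```properties
aws.accessKeyId=YOUR_ACCESS_KEY_ID
aws.secretKey=YOUR_SECRET_ACCESS_KEY
aws.endpoint=https://s3.us-east-1.amazonaws.com
```

In production, prefer environment variables or a secrets manager over committing credentials to a properties file.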

Introduction to basic operations

Creating Buckets
    public boolean ifExistsBucket(String bucketName) {
        // Send a HEAD request to check whether the bucket exists
        try {
            HeadBucketResponse headBucketResponse = s3Client.headBucket(
                    HeadBucketRequest.builder().bucket(bucketName).build());
        } catch (S3Exception e) {
            // An S3 exception with status code 404 means the bucket does not exist
            if (e.statusCode() == 404) {
                return false;
            } else {
                // Print the exception stack trace
                e.printStackTrace();
            }
        }
        // If no exception was thrown, or the status code was not 404, the bucket exists
        return true;
    }

    public boolean createBucket(String bucketName) throws RuntimeException {
        // Check whether the bucket already exists
        if (ifExistsBucket(bucketName)) {
            // If it does, throw a runtime exception
            throw new RuntimeException("Bucket already exists");
        }
        // Create a new bucket
        s3Client.createBucket(CreateBucketRequest.builder().bucket(bucketName).build());
        // Verify that the bucket was created successfully
        return ifExistsBucket(bucketName);
    }

    public boolean removeBucket(String bucketName) {
        // If the bucket does not exist, return true
        if (!ifExistsBucket(bucketName)) {
            return true;
        }
        // Delete the bucket (it must be empty)
        s3Client.deleteBucket(DeleteBucketRequest.builder().bucket(bucketName).build());
        // Verify that the bucket has been deleted
        return !ifExistsBucket(bucketName);
    }
Upload
    public boolean uploadObject(String bucketName, String targetObject, String sourcePath) {
        // Upload a local file to the given bucket and object key
        try {
            s3Client.putObject(PutObjectRequest.builder()
                    .bucket(bucketName)
                    .key(targetObject)
                    .build(), RequestBody.fromFile(new File(sourcePath)));
        } catch (Exception e) {
            // On failure, print the stack trace and return false
            e.printStackTrace();
            return false;
        }
        // The upload succeeded
        return true;
    }

    public boolean putObject(String bucketName, String object, InputStream inputStream, long size) {
        // Upload data from an input stream to the given bucket and object key
        try {
            s3Client.putObject(PutObjectRequest.builder()
                    .bucket(bucketName)
                    .key(object)
                    .build(), RequestBody.fromInputStream(inputStream, size));
        } catch (Exception e) {
            // On failure, print the stack trace and return false
            e.printStackTrace();
            return false;
        }
        // The upload succeeded
        return true;
    }

    public boolean putObject(String bucketName, String object, InputStream inputStream, long size, Map<String, String> tags) {
        // Upload data from an input stream and attach tags
        try {
            // Convert the tag map to a collection of Tag objects
            Collection<Tag> tagList = tags.entrySet().stream()
                    .map(entry -> Tag.builder().key(entry.getKey()).value(entry.getValue()).build())
                    .collect(Collectors.toList());
            // Upload the object with its tags
            s3Client.putObject(PutObjectRequest.builder()
                    .bucket(bucketName)
                    .key(object)
                    .tagging(Tagging.builder()
                            .tagSet(tagList)
                            .build())
                    .build(), RequestBody.fromInputStream(inputStream, size));
        } catch (Exception e) {
            // On failure, print the stack trace and return false
            e.printStackTrace();
            return false;
        }
        // The upload succeeded
        return true;
    }
Multipart upload
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.*;

public class S3MultipartUploader {

    private static final S3Client s3Client = S3Client.create();

    /**
     * Start a new multipart upload session.
     * (SDK v2 names this operation CreateMultipartUpload; SDK v1 called it InitiateMultipartUpload.)
     *
     * @param bucketName bucket name
     * @param objectKey  object key
     * @return response whose uploadId is required by subsequent calls
     */
    public static CreateMultipartUploadResponse initiateMultipartUpload(String bucketName, String objectKey) {
        CreateMultipartUploadRequest request = CreateMultipartUploadRequest.builder()
                .bucket(bucketName)
                .key(objectKey)
                .build();
        return s3Client.createMultipartUpload(request);
    }

    /**
     * Upload a single part.
     *
     * @param bucketName bucket name
     * @param objectKey  object key
     * @param uploadId   ID of the upload session
     * @param partNumber part number
     * @param file       source file
     * @param offset     offset within the file
     * @param length     number of bytes in this part
     * @return response whose part ETag is used when completing the upload
     */
    public static UploadPartResponse uploadPart(String bucketName, String objectKey, String uploadId,
                                                int partNumber, File file, long offset, long length) {
        try (FileInputStream fis = new FileInputStream(file)) {
            // Position the stream at the start of this part
            fis.skip(offset);
            UploadPartRequest request = UploadPartRequest.builder()
                    .bucket(bucketName)
                    .key(objectKey)
                    .uploadId(uploadId)
                    .partNumber(partNumber)
                    .build();
            RequestBody requestBody = RequestBody.fromInputStream(fis, length);
            return s3Client.uploadPart(request, requestBody);
        } catch (IOException e) {
            e.printStackTrace();
            return null;
        }
    }

    /**
     * Complete the multipart upload.
     *
     * @param bucketName bucket name
     * @param objectKey  object key
     * @param uploadId   ID of the upload session
     * @param parts      list of uploaded parts
     */
    public static CompleteMultipartUploadResponse completeMultipartUpload(String bucketName, String objectKey,
                                                                          String uploadId, List<CompletedPart> parts) {
        CompleteMultipartUploadRequest request = CompleteMultipartUploadRequest.builder()
                .bucket(bucketName)
                .key(objectKey)
                .uploadId(uploadId)
                .multipartUpload(CompletedMultipartUpload.builder()
                        .parts(parts)
                        .build())
                .build();
        return s3Client.completeMultipartUpload(request);
    }

    /**
     * Abort a multipart upload session.
     *
     * @param bucketName bucket name
     * @param objectKey  object key
     * @param uploadId   ID of the upload session
     */
    public static void abortMultipartUpload(String bucketName, String objectKey, String uploadId) {
        AbortMultipartUploadRequest request = AbortMultipartUploadRequest.builder()
                .bucket(bucketName)
                .key(objectKey)
                .uploadId(uploadId)
                .build();
        s3Client.abortMultipartUpload(request);
    }

    public static void main(String[] args) {
        String bucketName = "your-bucket-name";
        String objectKey = "path/to/your-object";
        File fileToUpload = new File("path/to/your/local/file");

        // Start the multipart upload session
        CreateMultipartUploadResponse initResponse = initiateMultipartUpload(bucketName, objectKey);
        String uploadId = initResponse.uploadId();

        // Compute the file size and number of parts (rounding up so the final partial part is included)
        long fileSize = fileToUpload.length();
        int partSize = 5 * 1024 * 1024; // each part is 5 MB
        int numberOfParts = (int) Math.ceil((double) fileSize / partSize);

        // Upload each part
        List<CompletedPart> completedParts = new ArrayList<>();
        for (int i = 1; i <= numberOfParts; i++) {
            long startOffset = (long) (i - 1) * partSize;
            long currentPartSize = Math.min(partSize, fileSize - startOffset);

            // Upload this part
            UploadPartResponse partResponse = uploadPart(bucketName, objectKey, uploadId, i, fileToUpload, startOffset, currentPartSize);
            if (partResponse != null) {
                // Record the completed part
                CompletedPart completedPart = CompletedPart.builder()
                        .partNumber(i)
                        .eTag(partResponse.eTag())
                        .build();
                completedParts.add(completedPart);
            }
        }

        // Complete the multipart upload
        CompleteMultipartUploadResponse completeResponse = completeMultipartUpload(bucketName, objectKey, uploadId, completedParts);
        if (completeResponse != null) {
            System.out.println("Multipart upload succeeded!");
        } else {
            System.out.println("Multipart upload failed.");
        }
    }
}
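
The part-count arithmetic in the example above deserves care: plain integer division silently drops a final partial part. A stdlib-only sketch of the calculation (the class and method names are my own, purely illustrative):

```java
public class PartMath {

    // Number of parts needed to cover fileSize bytes using parts of partSize bytes,
    // rounding up so a final partial part is counted.
    static int partCount(long fileSize, long partSize) {
        return (int) ((fileSize + partSize - 1) / partSize);
    }

    // Size in bytes of 1-based part i: a full partSize except possibly the last part.
    static long sizeOfPart(int i, long fileSize, long partSize) {
        long offset = (long) (i - 1) * partSize;
        return Math.min(partSize, fileSize - offset);
    }

    public static void main(String[] args) {
        long fileSize = 12L * 1024 * 1024; // a hypothetical 12 MB file
        long partSize = 5L * 1024 * 1024;  // 5 MB parts
        System.out.println(partCount(fileSize, partSize));     // 3 parts: 5 MB + 5 MB + 2 MB
        System.out.println(sizeOfPart(3, fileSize, partSize)); // 2097152 (the 2 MB tail)
    }
}
```

Note that S3 requires every part except the last to be at least 5 MB, which is why 5 MB is a common choice of part size.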
Download
    public boolean downObject(String bucketName, String objectName, String targetPath) {
        // Download the object to the given local path
        try {
            s3Client.getObject(GetObjectRequest.builder()
                    .bucket(bucketName)
                    .key(objectName)
                    .build(), new File(targetPath).toPath());
        } catch (Exception e) {
            // On failure, print the stack trace and return false
            e.printStackTrace();
            return false;
        }
        // The download succeeded
        return true;
    }

    public InputStream getObject(String bucketName, String object) {
        InputStream objectStream = null;
        // Get an input stream for the given bucket and object key
        try {
            objectStream = s3Client.getObject(GetObjectRequest.builder()
                    .bucket(bucketName)
                    .key(object)
                    .build());
        } catch (Exception e) {
            // On failure, print the stack trace and return null
            e.printStackTrace();
            return null;
        }
        // Return the object's input stream
        return objectStream;
    }

    public ResponseBytes<GetObjectResponse> getObjectAsBytes(String bucketName, String object) {
        ResponseBytes<GetObjectResponse> objectAsBytes = null;
        // Read the object's content as bytes
        try {
            objectAsBytes = s3Client.getObjectAsBytes(GetObjectRequest.builder()
                    .bucket(bucketName)
                    .key(object)
                    .build());
        } catch (Exception e) {
            // On failure, print the stack trace and return null
            e.printStackTrace();
            return null;
        }
        // Return the object's content as bytes
        return objectAsBytes;
    }
Getting a pre-signed object URL
    public String presignedURLofObject(String bucketName, String object, int expire) {
        URL url = null;
        // Generate a pre-signed URL
        try {
            url = s3Presigner.presignGetObject(GetObjectPresignRequest.builder()
                    .signatureDuration(Duration.ofMinutes(expire))
                    .getObjectRequest(GetObjectRequest.builder()
                            .bucket(bucketName)
                            .key(object)
                            .build())
                    .build()).url();
        } catch (Exception e) {
            // On failure, print the stack trace and return null
            e.printStackTrace();
            return null;
        } finally {
            // Close the presigner client
            s3Presigner.close();
        }
        // Return the string form of the pre-signed URL
        return url.toString();
    }

The returned URL can be previewed directly in the browser, but previewing it through other plug-ins may trigger a cross-origin (CORS) error. In that case you need to add a CORS rule to the bucket.

Code to add CORS rules:

    CORSRule corsRule = CORSRule.builder()
            .allowedOrigins("http://ip:ports") // multiple origins can be set
            //.allowedOrigins("*")
            .allowedHeaders("Authorization")
            .allowedMethods("GET", "HEAD")
            .exposeHeaders("Access-Control-Allow-Origin")
            .build();

    // Apply the CORS configuration
    s3Client.putBucketCors(PutBucketCorsRequest.builder()
            .bucket(bucketName)
            .corsConfiguration(CORSConfiguration.builder()
                    .corsRules(corsRule)
                    .build())
            .build());

Adding CORS rules in the S3 console:

Click your bucket, open the Permissions tab, scroll down to the CORS section, and click Edit to add the rule below:

[
    {
        "AllowedHeaders": [
            "Authorization"
        ],
        "AllowedMethods": [
            "GET",
            "HEAD"
        ],
        "AllowedOrigins": [
            "http://ip:ports"
        ],
        "ExposeHeaders": [
            "Access-Control-Allow-Origin"
        ],
        "MaxAgeSeconds": 3000
    }
]

Add with S3 Browser

S3 Browser is a third-party tool for managing Amazon S3 storage services. It provides a graphical user interface (GUI) that makes it easier for users to upload, download, manage, and browse objects and storage buckets stored in Amazon S3. It can also be used to connect to minio or other storage.

Add your connection settings and wait for your buckets to be listed. Right-click a bucket name, select CORS Configuration, add your rule, and click Apply.

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>http://ip:ports</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>HEAD</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <ExposeHeader>Access-Control-Allow-Origin</ExposeHeader>
    <AllowedHeader>Authorization</AllowedHeader>
  </CORSRule>
</CORSConfiguration>

Partial method collection

Because this project extends another one, some of the methods ended up a bit more cumbersome than they strictly need to be.

S3Service
import java.io.InputStream;
import java.util.List;
import java.util.Map;

import org.springframework.stereotype.Service;
import software.amazon.awssdk.core.ResponseBytes;
import software.amazon.awssdk.services.s3.model.Bucket;
import software.amazon.awssdk.services.s3.model.GetObjectResponse;
import software.amazon.awssdk.services.s3.model.HeadObjectResponse;
import software.amazon.awssdk.services.s3.model.S3Object;

/**
 * Defines general functions for interacting with the AWS S3 client.
 */
@Service
public interface S3Service {

    /**
     * Determines if the specified bucket exists。
     *
     * @param bucketName Storage Drum Name。
     * @return Returns if the bucket exists true,Otherwise, return false。
     */
    boolean ifExistsBucket(String bucketName);

    /**
     * Create a new storage bucket。If the bucket already exists,then a runtime exception is thrown。
     *
     * @param bucketName 新建的Storage Drum Name。
     * @return Returns if the bucket was created successfully true。
     * @throws RuntimeException If the bucket already exists。
     */
    boolean createBucket(String bucketName) throws RuntimeException;

    /**
     * Deleting a bucket。The storage bucket must be empty;Otherwise it won't be deleted。
     * Returns the specified bucket even if it does not exist. true。
     *
     * @param bucketName 要删除的Storage Drum Name。
     * @return If the bucket is deleted or does not exist return true。
     */
    boolean removeBucket(String bucketName);

    /**
     * List the current S3 All storage buckets present on the server。
     *
     * @return Storage Bucket List。
     */
    List<Bucket> alreadyExistBuckets();

    /**
     * Listing Objects in a Bucket,Optionally filter by prefix and specify whether or not to include subdirectories。
     *
     * @param bucketName Storage Drum Name。
     * @param predir     Prefix Filter Criteria。
     * @param recursive  Whether to include subdirectories。
     * @return S3 object list。
     */
    List<S3Object> listObjects(String bucketName, String predir, boolean recursive);

    /**
     * Listing Objects in a Bucket,Filter by prefix。
     *
     * @param bucketName Storage Drum Name。
     * @param predir     Prefix Filter Criteria。
     * @return S3 object list。
     */
    List<S3Object> listObjects(String bucketName, String predir);

    /**
     * Copying an object file from one bucket to another。
     *
     * @param pastBucket The bucket where the source file is located。
     * @param pastObject Path to the source file in the storage bucket。
     * @param newBucket  Target bucket to be copied into。
     * @param newObject  Target bucket to be copied into内的trails。
     * @return If the copy succeeds return true。
     */
    boolean copyObject(String pastBucket, String pastObject, String newBucket, String newObject);

    /**
     * Download an object。
     *
     * @param bucketName Storage Drum Name。
     * @param objectName Object path and name(for example:2022/02/02/)。
     * @param targetPath Target path and name(for example:/opt/)。
     * @return If the download is successful return true。
     */
    boolean downObject(String bucketName, String objectName, String targetPath);

    /**
     * Returns the signature of the object URL。
     *
     * @param bucketName Storage Drum Name。
     * @param object     Object path and name。
     * @param expire     expiration date (of document)(minutes)。
     * @return sign (one's name with a pen etc) URL。
     */
    String presignedURLofObject(String bucketName, String object, int expire);

    /**
     * 返回带有additional parameter的对象sign (one's name with a pen etc) URL。
     *
     * @param bucketName Storage Drum Name。
     * @param object     Object path and name。
     * @param expire     expiration date (of document)(minutes)。
     * @param map        additional parameter。
     * @return sign (one's name with a pen etc) URL。
     */
    String presignedURLofObject(String bucketName, String object, int expire, Map<String, String> map);

    /**
     * Deleting an object。
     *
     * @param bucketName Storage Drum Name。
     * @param object     object name(trails)。
     * @return Returns if the deletion was successful true。
     */
    boolean deleteObject(String bucketName, String object);

    /**
     * Uploading an object,Using local files as sources。
     *
     * @param bucketName Storage Drum Name。
     * @param targetObject Name of the target object。
     * @param sourcePath   本地文件trails(for example:/opt/)。
     * @return Returns if the upload was successful true。
     */
    boolean uploadObject(String bucketName, String targetObject, String sourcePath);

    /**
     * Uploads an object, using an input stream as the source.
     *
     * @param bucketName  bucket name.
     * @param object      object name.
     * @param inputStream input stream.
     * @param size        object size.
     * @return true if the upload succeeded.
     */
    boolean putObject(String bucketName, String object, InputStream inputStream, long size);

    /**
     * Uploads an object with tags.
     *
     * @param bucketName  bucket name.
     * @param object      object name.
     * @param inputStream input stream.
     * @param size        object size.
     * @param tags        tag map.
     * @return true if the upload succeeded.
     */
    boolean putObject(String bucketName, String object, InputStream inputStream, long size, Map<String, String> tags);

    /**
     * Gets an object.
     *
     * @param bucketName bucket name.
     * @param object     object name.
     * @return an input stream of type GetObjectResponse.
     */
    InputStream getObject(String bucketName, String object);

    /**
     * Gets the contents of an object as a byte stream.
     *
     * @param bucketName bucket name.
     * @param object     object name.
     * @return a byte stream containing the contents of the object.
     */
    ResponseBytes<GetObjectResponse> getObjectAsBytes(String bucketName, String object);

    /**
     * Checks whether a file exists in a bucket.
     *
     * @param bucketName bucket name.
     * @param filename   file name.
     * @param recursive  whether to search recursively.
     * @return true if the file exists.
     */
    boolean fileifexist(String bucketName, String filename, boolean recursive);

    /**
     * Gets the tags of an object.
     *
     * @param bucketName bucket name.
     * @param object     object name.
     * @return tag map.
     */
    Map<String, String> getTags(String bucketName, String object);

    /**
     * Adds tags to an object in a bucket.
     *
     * @param bucketName bucket name.
     * @param object     object name.
     * @param addTags    tags to add.
     * @return true if the tags were added successfully.
     */
    boolean addTags(String bucketName, String object, Map<String, String> addTags);

    /**
     * Gets object information and metadata.
     *
     * @param bucketName bucket name.
     * @param object     object name.
     * @return object information and metadata as a HeadObjectResponse.
     */
    HeadObjectResponse statObject(String bucketName, String object);

    /**
     * Checks whether an object exists.
     *
     * @param bucketName bucket name.
     * @param objectName object name.
     * @return true if the object exists.
     */
    boolean ifExistObject(String bucketName, String objectName);

    /**
     * Derives the metadata object name from another object name.
     *
     * @param objectName object name.
     * @return metadata name.
     */
    String getMetaNameFromOther(String objectName);

    /**
     * Changes the tag of an object.
     *
     * @param object object name.
     * @param tag    new tag value.
     * @return true if the change succeeded.
     */
    boolean changeTag(String object, String tag);

    /**
     * Sets a bucket policy that allows public access.
     *
     * @param bucketName bucket name.
     */
    void BucketAccessPublic(String bucketName);

}
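The implementation below autowires two beans named `s3Client` and `s3Presigner`, but the configuration class that defines them is not part of this excerpt. A minimal sketch of what it might look like with the AWS SDK v2 builders is shown here; the endpoint, region, and credential values are placeholders, and path-style addressing is enabled because the related articles target a MinIO-compatible endpoint:

```java
import java.net.URI;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.presigner.S3Presigner;

@Configuration
public class S3Config {

    // Placeholder values -- substitute your own endpoint and credentials
    private static final String ENDPOINT = "http://127.0.0.1:9000";
    private static final String ACCESS_KEY = "your-access-key";
    private static final String SECRET_KEY = "your-secret-key";

    @Bean("s3Client")
    public S3Client s3Client() {
        return S3Client.builder()
                .endpointOverride(URI.create(ENDPOINT))
                .region(Region.US_EAST_1) // arbitrary for S3-compatible stores, but required
                .credentialsProvider(StaticCredentialsProvider.create(
                        AwsBasicCredentials.create(ACCESS_KEY, SECRET_KEY)))
                .forcePathStyle(true) // path-style addressing, typical for MinIO-style endpoints
                .build();
    }

    @Bean("s3Presigner")
    public S3Presigner s3Presigner() {
        return S3Presigner.builder()
                .endpointOverride(URI.create(ENDPOINT))
                .region(Region.US_EAST_1)
                .credentialsProvider(StaticCredentialsProvider.create(
                        AwsBasicCredentials.create(ACCESS_KEY, SECRET_KEY)))
                .build();
    }
}
```

Both beans are thread-safe and intended to be shared application-wide, which is why the service injects them once rather than creating clients per request.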
S3ServiceImpl
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.stereotype.Service;
import software.amazon.awssdk.core.ResponseBytes;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.*;
import software.amazon.awssdk.services.s3.presigner.S3Presigner;
import software.amazon.awssdk.services.s3.presigner.model.GetObjectPresignRequest;
// plus the project-local import of S3Service (package name elided in the original)

import java.io.File;
import java.io.InputStream;
import java.io.UnsupportedEncodingException;
import java.net.URL;
import java.net.URLEncoder;
import java.time.Duration;
import java.util.*;
import java.util.stream.Collectors;

@Service
public class S3ServiceImpl implements S3Service {
    private final Logger log = LoggerFactory.getLogger(S3ServiceImpl.class);

    // Stand-ins for project constants whose names were lost in the original
    // listing: the bucket holding metadata objects, and the tag key that
    // stores the original file name
    private static final String METADATA_BUCKET = "metadata-bucket";
    private static final String FILE_NAME_TAG = "fileName";

    @Qualifier("s3Client")
    @Autowired
    S3Client s3Client;

    @Qualifier("s3Presigner")
    @Autowired
    S3Presigner s3Presigner;
    
    @Override
    public boolean ifExistsBucket(String bucketName) {
        // Send a HEAD request to check whether the bucket exists
        try {
            s3Client.headBucket(HeadBucketRequest.builder().bucket(bucketName).build());
        } catch (S3Exception e) {
            // A 404 status code means the bucket does not exist
            if (e.statusCode() == 404) {
                return false;
            } else {
                // Print the stack trace for any other error
                e.printStackTrace();
            }
        }
        // No exception (or a non-404 status) means the bucket exists
        return true;
    }

    @Override
    public boolean createBucket(String bucketName) throws RuntimeException {
        // Throw if the bucket already exists
        if (ifExistsBucket(bucketName)) {
            throw new RuntimeException("Bucket already exists");
        }
        // Create the new bucket
        s3Client.createBucket(CreateBucketRequest.builder().bucket(bucketName).build());
        // Verify that the bucket was created
        return ifExistsBucket(bucketName);
    }

    @Override
    public boolean removeBucket(String bucketName) {
        // If the bucket does not exist, there is nothing to delete
        if (!ifExistsBucket(bucketName)) {
            return true;
        }
        // Delete the bucket
        s3Client.deleteBucket(DeleteBucketRequest.builder().bucket(bucketName).build());
        // Verify that the bucket is gone
        return !ifExistsBucket(bucketName);
    }

    @Override
    public List<Bucket> alreadyExistBuckets() {
        // List all existing buckets
        return s3Client.listBuckets().buckets();
    }

    @Override
    public boolean fileifexist(String bucketName, String filename, boolean recursive) {
        // Flag marking whether the file was found
        boolean flag = false;
        // Build the first ListObjectsV2 request
        ListObjectsV2Request request = ListObjectsV2Request.builder().bucket(bucketName).build();
        ListObjectsV2Response response;
        // Loop until all pages have been fetched or the file is found
        do {
            // Send the request and read the response
            response = s3Client.listObjectsV2(request);
            // Scan the page for a matching key
            for (S3Object content : response.contents()) {
                if (content.key().equals(filename)) {
                    flag = true;
                    break;
                }
            }
            // Build the next request from the continuation token in case
            // the response was truncated
            request = ListObjectsV2Request.builder()
                    .bucket(bucketName)
                    .continuationToken(response.nextContinuationToken())
                    .build();
        } while (!flag && response.isTruncated());
        // Return whether the file exists
        return flag;
    }

    @Override
    public List<S3Object> listObjects(String bucketName, String predir, boolean recursive) {
        // Build a ListObjects request for keys with the given prefix
        return s3Client.listObjects(ListObjectsRequest.builder()
                .bucket(bucketName)
                .prefix(predir)
                .maxKeys(1000)
                .build()).contents();
    }

    @Override
    public List<S3Object> listObjects(String bucketName, String predir) {
        // Build a ListObjects request for keys with the given prefix
        return s3Client.listObjects(ListObjectsRequest.builder()
                .bucket(bucketName)
                .prefix(predir)
                .maxKeys(1000)
                .build()).contents();
    }

    @Override
    public boolean copyObject(String pastBucket, String pastObject, String newBucket, String newObject) {
        // Try to copy the object
        try {
            s3Client.copyObject(CopyObjectRequest.builder()
                    .sourceBucket(pastBucket)
                    .sourceKey(pastObject)
                    .destinationBucket(newBucket)
                    .destinationKey(newObject)
                    .build());
        } catch (Exception e) {
            // On failure, print the stack trace and return false
            e.printStackTrace();
            return false;
        }
        // Return true if the copy succeeded
        return true;
    }

    @Override
    public boolean downObject(String bucketName, String objectName, String targetPath) {
        // Try to download the object to the given path
        try {
            s3Client.getObject(GetObjectRequest.builder()
                    .bucket(bucketName)
                    .key(objectName)
                    .build(), new File(targetPath).toPath());
        } catch (Exception e) {
            // On failure, print the stack trace and return false
            e.printStackTrace();
            return false;
        }
        // Return true if the download succeeded
        return true;
    }

    @Override
    public String presignedURLofObject(String bucketName, String object, int expire) {
        URL url;
        // Try to generate a pre-signed URL (expire is assumed to be in seconds)
        try {
            url = s3Presigner.presignGetObject(GetObjectPresignRequest.builder()
                    .signatureDuration(Duration.ofSeconds(expire))
                    .getObjectRequest(GetObjectRequest.builder()
                            .bucket(bucketName)
                            .key(object)
                            .build())
                    .build()).url();
        } catch (Exception e) {
            // On failure, print the stack trace and return null
            e.printStackTrace();
            return null;
        }
        // Note: do not close the injected presigner here -- it is a shared
        // bean, and closing it would break every subsequent call
        // Return the string form of the pre-signed URL
        return url.toString();
    }

    @Override
    public String presignedURLofObject(String bucketName, String object, int expire, Map<String, String> map) {
        // Identical to the overload above; the map parameter is not used
        // in this implementation
        URL url;
        try {
            url = s3Presigner.presignGetObject(GetObjectPresignRequest.builder()
                    .signatureDuration(Duration.ofSeconds(expire))
                    .getObjectRequest(GetObjectRequest.builder()
                            .bucket(bucketName)
                            .key(object)
                            .build())
                    .build()).url();
        } catch (Exception e) {
            // On failure, print the stack trace and return null
            e.printStackTrace();
            return null;
        }
        // Return the string form of the pre-signed URL
        return url.toString();
    }

    @Override
    public boolean deleteObject(String bucketName, String object) {
        // Try to delete the object
        try {
            s3Client.deleteObject(DeleteObjectRequest.builder()
                    .bucket(bucketName)
                    .key(object)
                    .build());
        } catch (Exception e) {
            // On failure, print the stack trace and return false
            e.printStackTrace();
            return false;
        }
        // Return true if the deletion succeeded
        return true;
    }
    
    @Override
    public boolean uploadObject(String bucketName, String targetObject, String sourcePath) {
        // Try to upload a local file to the given bucket and object key
        try {
            s3Client.putObject(PutObjectRequest.builder()
                    .bucket(bucketName)
                    .key(targetObject)
                    .build(), RequestBody.fromFile(new File(sourcePath)));
        } catch (Exception e) {
            // On failure, print the stack trace and return false
            e.printStackTrace();
            return false;
        }
        // Return true if the upload succeeded
        return true;
    }

    @Override
    public boolean putObject(String bucketName, String object, InputStream inputStream, long size) {
        // Try to upload the input stream to the given bucket and object key
        try {
            s3Client.putObject(PutObjectRequest.builder()
                    .bucket(bucketName)
                    .key(object)
                    .build(), RequestBody.fromInputStream(inputStream, size));
        } catch (Exception e) {
            // On failure, print the stack trace and return false
            e.printStackTrace();
            return false;
        }
        // Return true if the upload succeeded
        return true;
    }

    @Override
    public boolean putObject(String bucketName, String object, InputStream inputStream, long size, Map<String, String> tags) {
        // Try to upload the input stream and set the tags in one request
        try {
            // Convert the tag map into a collection of Tag objects
            Collection<Tag> tagList = tags.entrySet().stream()
                    .map(entry -> Tag.builder().key(entry.getKey()).value(entry.getValue()).build())
                    .collect(Collectors.toList());
            // Upload the object together with its tags
            s3Client.putObject(PutObjectRequest.builder()
                    .bucket(bucketName)
                    .key(object)
                    .tagging(Tagging.builder()
                            .tagSet(tagList)
                            .build())
                    .build(), RequestBody.fromInputStream(inputStream, size));
        } catch (Exception e) {
            // On failure, print the stack trace and return false
            e.printStackTrace();
            return false;
        }
        // Return true if the upload succeeded
        return true;
    }

    @Override
    public InputStream getObject(String bucketName, String object) {
        InputStream objectStream;
        // Try to fetch the object as an input stream
        try {
            objectStream = s3Client.getObject(GetObjectRequest.builder()
                    .bucket(bucketName)
                    .key(object)
                    .build());
        } catch (Exception e) {
            // On failure, print the stack trace and return null
            e.printStackTrace();
            return null;
        }
        // Return the object's input stream
        return objectStream;
    }

    @Override
    public ResponseBytes<GetObjectResponse> getObjectAsBytes(String bucketName, String object) {
        ResponseBytes<GetObjectResponse> objectAsBytes;
        // Try to fetch the object contents as bytes
        try {
            objectAsBytes = s3Client.getObjectAsBytes(GetObjectRequest.builder()
                    .bucket(bucketName)
                    .key(object)
                    .build());
        } catch (Exception e) {
            // On failure, print the stack trace and return null
            e.printStackTrace();
            return null;
        }
        // Return the object contents as bytes
        return objectAsBytes;
    }

    @Override
    public Map<String, String> getTags(String bucketName, String object) {
        Map<String, String> tags;
        // Try to fetch the tag set of the object
        try {
            List<Tag> tagList = s3Client.getObjectTagging(GetObjectTaggingRequest.builder()
                    .bucket(bucketName)
                    .key(object)
                    .build()).tagSet();
            // Convert the tag collection into a map
            tags = tagList.stream().collect(Collectors.toMap(Tag::key, Tag::value));
        } catch (Exception e) {
            // On failure, print the stack trace and return null
            e.printStackTrace();
            return null;
        }
        // Return the tag map
        return tags;
    }

    @Override
    public boolean addTags(String bucketName, String object, Map<String, String> addTags) {
        Map<String, String> newTags = new HashMap<>();
        // Fetch the existing tags, if any
        try {
            Map<String, String> oldTags = getTags(bucketName, object);
            if (oldTags != null && oldTags.size() > 0) {
                newTags.putAll(oldTags);
            }
        } catch (Exception e) {
            // On failure, print the stack trace and note the missing tags
            e.printStackTrace();
            log.warn("The source object has no existing tags");
        }
        // Merge in the new tags
        if (addTags != null && addTags.size() > 0) {
            newTags.putAll(addTags);
        }
        // Convert the merged map into a collection of Tag objects
        Collection<Tag> tagList = newTags.entrySet().stream()
                .map(entry -> Tag.builder().key(entry.getKey()).value(entry.getValue()).build())
                .collect(Collectors.toList());
        // Write the merged tag set back to the object
        try {
            s3Client.putObjectTagging(PutObjectTaggingRequest.builder()
                    .bucket(bucketName)
                    .key(object)
                    .tagging(Tagging.builder()
                            .tagSet(tagList).build())
                    .build());
        } catch (Exception e) {
            // On failure, print the stack trace and return false
            e.printStackTrace();
            return false;
        }
        // Return true if the tags were set successfully
        return true;
    }

    @Override
    public HeadObjectResponse statObject(String bucketName, String object) {
        HeadObjectResponse headObjectResponse;
        // Try to fetch the metadata of the object
        try {
            headObjectResponse = s3Client.headObject(HeadObjectRequest.builder()
                    .bucket(bucketName)
                    .key(object)
                    .build());
        } catch (Exception e) {
            // On failure, print the stack trace and return null
            e.printStackTrace();
            return null;
        }
        // Return the object's metadata
        return headObjectResponse;
    }

    @Override
    public boolean ifExistObject(String bucketName, String objectName) {
        // The object exists if listing with its key as a prefix returns at least one entry
        return !listObjects(bucketName, objectName, true).isEmpty();
    }

    @Override
    public String getMetaNameFromOther(String objectName) {
        String metaObject = "";
        // List the metadata bucket for objects matching this name
        // (METADATA_BUCKET stands in for a project constant lost in the
        // original listing)
        List<S3Object> s3Objects = listObjects(METADATA_BUCKET, objectName, true);
        if (s3Objects.size() == 1) {
            try {
                // Take the key of the single match and read its tags
                metaObject = s3Objects.get(0).key();
                Map<String, String> tags = getTags(METADATA_BUCKET, metaObject);
                // URL-encode the file-name tag
                String fileName = tags.get(FILE_NAME_TAG);
                return URLEncoder.encode(fileName, "UTF-8");
            } catch (UnsupportedEncodingException e) {
                // On failure, print the stack trace
                e.printStackTrace();
            }
        }
        // If no unique match was found or encoding failed, return the raw key
        return metaObject;
    }

    @Override
    public boolean changeTag(String object, String tag) {
        // Try to replace the object's tag (METADATA_BUCKET and FILE_NAME_TAG
        // stand in for project constants lost in the original listing)
        try {
            s3Client.putObjectTagging(PutObjectTaggingRequest.builder()
                    .bucket(METADATA_BUCKET)
                    .key(object)
                    .tagging(Tagging.builder()
                            .tagSet(Tag.builder()
                                    .key(FILE_NAME_TAG)
                                    .value(tag)
                                    .build())
                            .build())
                    .build());
        } catch (Exception e) {
            // On failure, print the stack trace and return false
            e.printStackTrace();
            return false;
        }
        // Return true if the change succeeded
        return true;
    }

    @Override
    public void BucketAccessPublic(String bucketName) {
        // Bucket policy JSON granting public read/write access
        String config = "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"AWS\":[\"*\"]},\"Action\":[\"s3:ListBucketMultipartUploads\",\"s3:GetBucketLocation\",\"s3:ListBucket\"],\"Resource\":[\"arn:aws:s3:::" + bucketName + "\"]},{\"Effect\":\"Allow\",\"Principal\":{\"AWS\":[\"*\"]},\"Action\":[\"s3:ListMultipartUploadParts\",\"s3:PutObject\",\"s3:AbortMultipartUpload\",\"s3:DeleteObject\",\"s3:GetObject\"],\"Resource\":[\"arn:aws:s3:::" + bucketName + "/*\"]}]}";
        try {
            // Apply the bucket policy
            s3Client.putBucketPolicy(PutBucketPolicyRequest.builder()
                    .bucket(bucketName)
                    .policy(config).build());
        } catch (Exception e) {
            // On failure, print the stack trace
            e.printStackTrace();
        }
    }
}
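The one-line policy string inside `BucketAccessPublic` is hard to audit. As a readability aid, here is a small stand-alone sketch (the `PolicySketch` class and its method name are not part of the original service) that assembles the same two-statement public-access policy for a given bucket name, with each statement spelled out:

```java
// Sketch only: rebuilds the policy JSON from BucketAccessPublic in a
// readable form. The output matches the inline string in the service.
public class PolicySketch {

    /** Builds the two-statement public-access policy for the given bucket. */
    public static String publicAccessPolicy(String bucketName) {
        String bucketArn = "arn:aws:s3:::" + bucketName;
        return "{\"Version\":\"2012-10-17\",\"Statement\":["
                // Statement 1: bucket-level actions (listing, location, multipart listing)
                + "{\"Effect\":\"Allow\",\"Principal\":{\"AWS\":[\"*\"]},"
                + "\"Action\":[\"s3:ListBucketMultipartUploads\",\"s3:GetBucketLocation\",\"s3:ListBucket\"],"
                + "\"Resource\":[\"" + bucketArn + "\"]},"
                // Statement 2: object-level actions (get, put, delete, multipart parts)
                + "{\"Effect\":\"Allow\",\"Principal\":{\"AWS\":[\"*\"]},"
                + "\"Action\":[\"s3:ListMultipartUploadParts\",\"s3:PutObject\",\"s3:AbortMultipartUpload\",\"s3:DeleteObject\",\"s3:GetObject\"],"
                + "\"Resource\":[\"" + bucketArn + "/*\"]}]}";
    }

    public static void main(String[] args) {
        System.out.println(publicAccessPolicy("demo-bucket"));
    }
}
```

If the policy grows beyond two statements, building it with a JSON library instead of string concatenation keeps the quoting manageable.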