C#: real-time upload from client to server to Amazon S3


Good morning. I have a desktop application that uploads files to a WCF service, and the WCF service then uploads them to Amazon S3.

This is my WCF method that receives the file and uploads it to S3:

public void UploadFile(RemoteFileInfo request)
{
    config = new AmazonS3Config();
    config.CommunicationProtocol = Protocol.HTTP;
    accessKeyID = "XXXXXXX";
    secretAccessKeyID = "YYYYYYYY";
    client = Amazon.AWSClientFactory.CreateAmazonS3Client(accessKeyID, secretAccessKeyID, config);
    int chunkSize = 2048;
    byte[] buffer = new byte[chunkSize];
    using (System.IO.MemoryStream writeStream = new System.IO.MemoryStream())
    {
        do
        {
            // read bytes from input stream
            int bytesRead = request.FileByteStream.Read(buffer, 0, chunkSize);
            if (bytesRead == 0) break;
            // simulates slow connection
            System.Threading.Thread.Sleep(3);
            // write bytes to output stream
            writeStream.Write(buffer, 0, bytesRead);
        } while (true);
        // report end
        Console.WriteLine("Done!");
        // start the uploading to S3
        PutObjectRequest fileRequest = new PutObjectRequest();
        fileRequest.WithInputStream(writeStream);
        fileRequest.Key = "testfile.pdf";
        fileRequest.WithBucketName("tempbucket");
        fileRequest.CannedACL = S3CannedACL.Private;
        fileRequest.StorageClass = S3StorageClass.Standard;
        client.PutObject(fileRequest);
        writeStream.Close();
    }
}

On my client side I get real-time progress while the file is uploading to the WCF service, but reaching 100% there does not mean the file has also been uploaded to S3. So I would like to know whether it is possible to start uploading to S3 while I am still writing to the stream, i.e. inside the

using (System.IO.MemoryStream writeStream = new System.IO.MemoryStream())
{

Is this possible? Any guidelines?

Thanks in advance.


You can use the InputStream property of PutObjectRequest:

public void UploadFile(RemoteFileInfo request)
{
    config = new AmazonS3Config();
    config.CommunicationProtocol = Protocol.HTTP;
    accessKeyID = "XXXXXXX";
    secretAccessKeyID = "YYYYYYYY";
    client = Amazon.AWSClientFactory.CreateAmazonS3Client(accessKeyID, secretAccessKeyID, config);
    int chunkSize = 2048;
    byte[] buffer = new byte[chunkSize];
    PutObjectRequest fileRequest = new PutObjectRequest();
    fileRequest.Key = "testfile.pdf";
    fileRequest.WithBucketName("tempbucket");
    fileRequest.CannedACL = S3CannedACL.Private;
    fileRequest.StorageClass = S3StorageClass.Standard;
    using (fileRequest.InputStream = new System.IO.MemoryStream())
    {
        do
        {
            // read bytes from input stream
            int bytesRead = request.FileByteStream.Read(buffer, 0, chunkSize);
            if (bytesRead == 0) break;
            // simulates slow connection
            System.Threading.Thread.Sleep(3);
            // write bytes to output stream
            fileRequest.InputStream.Write(buffer, 0, bytesRead);
        } while (true);
        // report end
        Console.WriteLine("Done!");
        // rewind the buffered stream so PutObject reads it from the beginning
        fileRequest.InputStream.Position = 0;
        client.PutObject(fileRequest);
    }
}

I would suggest uploading the file to WCF in chunks rather than as a single stream. I did it this way and it worked very well. You also need to return, from each call, the number of bytes actually written to Amazon; the client can then advance its progress bar based on that value. I know this forces you to write a while loop in the client application, but it lets you show progress for large files with 100% accuracy (a sketch of that client loop follows the contract below). Your WCF operation should accept a parameter like this:

[DataContract]
public class RemoteFileInfo
{
    [DataMember]
    public byte[] myChunk;
    [DataMember]
    public long myOffset;
    // other stuff you think you need to be sent each time.
}
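
For illustration, here is a minimal sketch of the client-side loop described above. The operation name UploadChunk, the generated proxy FileServiceClient, and the convention that the operation returns the total number of bytes written to Amazon so far are assumptions for this sketch, not part of the original answer; on the server, such an operation could either buffer the chunks or forward them to S3 with a multipart upload.

// Client-side chunked upload sketch (assumed operation: long UploadChunk(RemoteFileInfo)).
// Reads the local file piece by piece, sends each piece to the WCF service and
// updates the progress bar from the byte count the service reports back.
public void UploadFileInChunks(string filePath, Action<double> reportProgress)
{
    const int chunkSize = 64 * 1024; // 64 KB per call; tune for your connection
    byte[] buffer = new byte[chunkSize];
    using (System.IO.FileStream fileStream = System.IO.File.OpenRead(filePath))
    {
        long totalLength = fileStream.Length;
        long offset = 0;
        FileServiceClient proxy = new FileServiceClient(); // assumed generated WCF proxy
        while (true)
        {
            int bytesRead = fileStream.Read(buffer, 0, chunkSize);
            if (bytesRead == 0) break;
            // copy only the bytes actually read into the chunk that gets sent
            byte[] data = new byte[bytesRead];
            System.Array.Copy(buffer, data, bytesRead);
            RemoteFileInfo chunk = new RemoteFileInfo { myChunk = data, myOffset = offset };
            // the service reports how many bytes have really been written to Amazon so far
            long bytesWrittenOnServer = proxy.UploadChunk(chunk);
            offset += bytesRead;
            reportProgress(100.0 * bytesWrittenOnServer / totalLength);
        }
        proxy.Close();
    }
}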