Windows API seems much faster than BinaryWriter - is my test correct?

[EDIT]

Thanks to @VilleKrumlinde, I have fixed an error that I accidentally introduced earlier while trying to avoid a Code Analysis warning. I had inadvertently turned on "overlapped" file handling, which kept resetting the file length. That's now fixed, and you can call FastWrite() multiple times on the same stream without any problems.

(End of edit)


I'm running some timing tests to compare two different ways of writing an array of structs to disk. I had assumed that the I/O cost is so high compared to everything else that it isn't worth spending much time optimising anything else.

However, my timing tests seem to indicate otherwise. Either I'm making a mistake (which is entirely possible), or my optimisation really is significant.

First, some history: this FastWrite() method was originally written several years ago to support writing structs to files used by a legacy C++ program, and we still use it for that purpose. (There is also a corresponding FastRead() method.) It was written primarily to make it easier to write arrays of blittable structs to a file; its speed was a secondary concern.
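(FastRead() itself isn't shown in this post. Purely as a rough idea, a counterpart built the same way, assuming a ReadFile P/Invoke that mirrors the WriteFile declaration in the code below, might look like this:)

static void FastRead<T>(FileStream fs, T[] array, int offset, int count) where T : struct
{
    int sizeOfT = Marshal.SizeOf(typeof(T));
    GCHandle gcHandle = GCHandle.Alloc(array, GCHandleType.Pinned);
    try
    {
        uint bytesToRead = (uint)(count * sizeOfT);
        uint bytesRead;
        // Read straight into the pinned array at the requested element offset.
        if (!ReadFile(
                fs.SafeFileHandle,
                new IntPtr(gcHandle.AddrOfPinnedObject().ToInt64() + (offset * sizeOfT)),
                bytesToRead,
                out bytesRead,
                IntPtr.Zero))
        {
            throw new IOException("Unable to read file.", new Win32Exception(Marshal.GetLastWin32Error()));
        }
    }
    finally
    {
        gcHandle.Free();
    }
}

// Assumed P/Invoke, analogous to the WriteFile declaration used by FastWrite().
[DllImport("kernel32.dll", SetLastError = true)]
[return: MarshalAs(UnmanagedType.Bool)]
private static extern bool ReadFile
(
    SafeFileHandle hFile,
    IntPtr         lpBuffer,
    uint           nNumberOfBytesToRead,
    out uint       lpNumberOfBytesRead,
    IntPtr         lpOverlapped
);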

More than one person has told me that an optimisation like this isn't really any faster than just using BinaryWriter, so I finally bit the bullet and ran some timing tests. The results astonished me...

They appear to show that my FastWrite() method is 30 to 50 times faster than the equivalent code using BinaryWriter. That seems ridiculous, so I'm posting my code here to see if anyone can spot the mistake.

System specs

  • Testing an x86 build, run outside the debugger.
  • Running on Windows 8 x64 with 16 GB of RAM
  • Running on a normal hard drive (not an SSD)
  • Using .NET 4 in Visual Studio 2012 (so .NET 4.5 is installed)
Results

The results were as follows:

SlowWrite() took 00:00:02.0747141
FastWrite() took 00:00:00.0318139
SlowWrite() took 00:00:01.9205158
FastWrite() took 00:00:00.0327242
SlowWrite() took 00:00:01.9289878
FastWrite() took 00:00:00.0321100
SlowWrite() took 00:00:01.9374454
FastWrite() took 00:00:00.0316074

As you can see, that appears to show FastWrite() being around 50 times faster on that run.

Here's my test code. After running the test, I did a binary comparison of the two files to verify that they really are identical (i.e. FastWrite() and SlowWrite() produce identical files).
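(The binary comparison itself isn't part of the code below; for anyone reproducing the test, something as simple as the following hypothetical helper is enough for files of this size:)

static bool FilesAreIdentical(string path1, string path2)
{
    // Reads both files fully into memory - fine for the small files produced here.
    byte[] a = File.ReadAllBytes(path1);
    byte[] b = File.ReadAllBytes(path2);
    if (a.Length != b.Length)
        return false;
    for (int i = 0; i < a.Length; ++i)
        if (a[i] != b[i])
            return false;
    return true;
}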

See what you can make of it. :)

using System;
using System.ComponentModel;
using System.Diagnostics;
using System.IO;
using System.Runtime.InteropServices;
using System.Text;
using System.Threading;
using Microsoft.Win32.SafeHandles;
namespace ConsoleApplication1
{
    internal class Program
    {
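        // Packed, blittable layout: 1 + 2 + 4 + 8 + 4 + 8 = 27 bytes per element.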
        [StructLayout(LayoutKind.Sequential, Pack = 1)]
        struct TestStruct
        {
            public byte   ByteValue;
            public short  ShortValue;
            public int    IntValue;
            public long   LongValue;
            public float  FloatValue;
            public double DoubleValue;
        }
        static void Main()
        {
            Directory.CreateDirectory(@"C:\TEST");
            string filename1 = @"C:\TEST\TEST1.BIN";
            string filename2 = @"C:\TEST\TEST2.BIN";
            int count = 1000;
            var array = new TestStruct[10000];
            for (int i = 0; i < array.Length; ++i)
                array[i].IntValue = i;
            var sw = new Stopwatch();
            for (int trial = 0; trial < 4; ++trial)
            {
                sw.Restart();
                using (var output = new FileStream(filename1, FileMode.Create))
                using (var writer = new BinaryWriter(output, Encoding.Default, true))
                {
                    for (int i = 0; i < count; ++i)
                    {
                        output.Position = 0;
                        SlowWrite(writer, array, 0, array.Length);
                    }
                }
                Console.WriteLine("SlowWrite() took " + sw.Elapsed);
                sw.Restart();
                using (var output = new FileStream(filename2, FileMode.Create))
                {
                    for (int i = 0; i < count; ++i)
                    {
                        output.Position = 0;
                        FastWrite(output, array, 0, array.Length);
                    }
                }
                Console.WriteLine("FastWrite() took " + sw.Elapsed);
            }
        }
        static void SlowWrite(BinaryWriter writer, TestStruct[] array, int offset, int count)
        {
            for (int i = offset; i < offset + count; ++i)
            {
                var item = array[i];  // I also tried just writing from array[i] directly with similar results.
                writer.Write(item.ByteValue);
                writer.Write(item.ShortValue);
                writer.Write(item.IntValue);
                writer.Write(item.LongValue);
                writer.Write(item.FloatValue);
                writer.Write(item.DoubleValue);
            }
        }
        static void FastWrite<T>(FileStream fs, T[] array, int offset, int count) where T: struct
        {
            int sizeOfT = Marshal.SizeOf(typeof(T));
            GCHandle gcHandle = GCHandle.Alloc(array, GCHandleType.Pinned);
            try
            {
                uint bytesWritten;
                uint bytesToWrite = (uint)(count * sizeOfT);
                if
                (
                    !WriteFile
                    (
                        fs.SafeFileHandle,
                        new IntPtr(gcHandle.AddrOfPinnedObject().ToInt64() + (offset*sizeOfT)),
                        bytesToWrite,
                        out bytesWritten,
                        IntPtr.Zero
                    )
                )
                {
                    throw new IOException("Unable to write file.", new Win32Exception(Marshal.GetLastWin32Error()));
                }
                Debug.Assert(bytesWritten == bytesToWrite);
            }
            finally
            {
                gcHandle.Free();
            }
        }
        [DllImport("kernel32.dll", SetLastError=true)]
        [return: MarshalAs(UnmanagedType.Bool)]
        private static extern bool WriteFile
        (
            SafeFileHandle hFile,
            IntPtr         lpBuffer,
            uint           nNumberOfBytesToWrite,
            out uint       lpNumberOfBytesWritten,
            IntPtr         lpOverlapped
        );
    }
}

Follow-up

I also tested the code proposed by @ErenErsönmez, shown below (and verified at the end of the test that all three files are identical):

static void ErenWrite<T>(FileStream fs, T[] array, int offset, int count) where T : struct
{
    // Note: This doesn't use 'offset' or 'count', but it could easily be changed to do so,
    // and it doesn't change the results of this particular test program.
    int size = Marshal.SizeOf(typeof(TestStruct)) * array.Length;
    var bytes = new byte[size];
    GCHandle gcHandle = GCHandle.Alloc(array, GCHandleType.Pinned);
    try
    {
        var ptr = new IntPtr(gcHandle.AddrOfPinnedObject().ToInt64());
        Marshal.Copy(ptr, bytes, 0, size);
        fs.Write(bytes, 0, size);
    }
    finally
    {
        gcHandle.Free();
    }
}

I added a test for that code and, at the same time, removed the output.Position = 0; lines, so the files now grow to 263K (which is a sensible size).

With those changes, the results are:

Note how much slower the FastWrite() times are once you stop resetting the file pointer back to zero!

SlowWrite() took 00:00:01.9929327
FastWrite() took 00:00:00.1152534
ErenWrite() took 00:00:00.2185131
SlowWrite() took 00:00:01.8877979
FastWrite() took 00:00:00.2087977
ErenWrite() took 00:00:00.2191266
SlowWrite() took 00:00:01.9279477
FastWrite() took 00:00:00.2096208
ErenWrite() took 00:00:00.2102270
SlowWrite() took 00:00:01.7823760
FastWrite() took 00:00:00.1137891
ErenWrite() took 00:00:00.3028128
So it looks like you can use marshalling to get almost the same speed without needing the Windows API at all. The only drawback is that Eren's method has to make a copy of the entire array of structs, which could be a problem if memory is limited.
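
If that copy were ever a problem, one option (a hypothetical sketch I haven't benchmarked, not part of the tests above) would be to marshal and write the array in fixed-size chunks rather than all at once:

static void ChunkedWrite<T>(FileStream fs, T[] array, int offset, int count) where T : struct
{
    int sizeOfT = Marshal.SizeOf(typeof(T));
    const int maxChunkBytes = 64 * 1024;                        // arbitrary buffer cap
    int itemsPerChunk = Math.Max(1, maxChunkBytes / sizeOfT);
    var buffer = new byte[itemsPerChunk * sizeOfT];
    GCHandle gcHandle = GCHandle.Alloc(array, GCHandleType.Pinned);
    try
    {
        long baseAddress = gcHandle.AddrOfPinnedObject().ToInt64();
        int done = 0;
        while (done < count)
        {
            int items = Math.Min(itemsPerChunk, count - done);
            int bytes = items * sizeOfT;
            // Copy just this chunk of structs into the reusable byte buffer, then write it.
            Marshal.Copy(new IntPtr(baseAddress + (long)(offset + done) * sizeOfT), buffer, 0, bytes);
            fs.Write(buffer, 0, bytes);
            done += items;
        }
    }
    finally
    {
        gcHandle.Free();
    }
}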

Windows API seems much faster than BinaryWriter - is my test correct?

I don't think the difference has anything to do with BinaryWriter. I think it's because you're doing many file I/Os in SlowWrite (10000 * 6 of them) versus a single I/O in FastWrite. Your FastWrite has the advantage of having a single array of bytes ready to write to the file. On the other hand, in SlowWrite you take the hit of converting the structs to byte arrays one by one.

To test this theory, I wrote a small method that pre-builds one big byte array from all the structs, and then used that byte array in SlowWrite:

static byte[] bytes;
static void Prep(TestStruct[] array)
{
    int size = Marshal.SizeOf(typeof(TestStruct)) * array.Length;
    bytes = new byte[size];
    GCHandle gcHandle = GCHandle.Alloc(array, GCHandleType.Pinned);
    var ptr = gcHandle.AddrOfPinnedObject();
    Marshal.Copy(ptr, bytes, 0, size);
    gcHandle.Free();
}
static void SlowWrite(BinaryWriter writer)
{
    writer.Write(bytes);
}
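
(The answer doesn't show how this is wired into the test loop; presumably something along these lines, with the one-time conversion kept outside the timed region:)

Prep(array);                      // flatten the structs to bytes once, before timing
sw.Restart();
using (var output = new FileStream(filename1, FileMode.Create))
using (var writer = new BinaryWriter(output, Encoding.Default, true))
{
    for (int i = 0; i < count; ++i)
    {
        output.Position = 0;
        SlowWrite(writer);        // now a single Write() of the pre-built byte array
    }
}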

Results:

SlowWrite() took 00:00:00.0360392
FastWrite() took 00:00:00.0385015
SlowWrite() took 00:00:00.0358703
FastWrite() took 00:00:00.0381371
SlowWrite() took 00:00:00.0373875
FastWrite() took 00:00:00.0367692
SlowWrite() took 00:00:00.0348295
FastWrite() took 00:00:00.0373931

Note that SlowWrite now performs very similarly to FastWrite, which I think shows that the performance difference isn't due to the actual I/O performance but is related to the binary conversion process.