SqlBulkCopy with XML and a GUID foreign key
Keywords: XML, GUID, SqlBulkCopy | Updated: 2023-09-27 18:17:48
I am trying to insert data using an XML file and SqlBulkCopy. The target table is a time-series table, set up as follows:
create table TimeSeries (
Id uniqueidentifier constraint DF_TimeSeries_Id default (newid()) not null,
ObjectId uniqueidentifier not null,
[Date] datetime not null,
Value float(53) not null,
[Type] varchar (4) not null,
[Source] varchar (4) not null,
LastUpdate datetime constraint DF_TimeSeries_LastUpdate default (getdate()) not null,
TypeIndex smallint constraint DF_TimeSeries_TypeIndex default (0) not null,
constraint PK_TimeSeries primary key clustered ([Date] asc, ObjectId asc, [Type] asc, [Source] asc, TypeIndex asc) with (fillfactor = 80)
);
go
create nonclustered index [IX_TimeSeries_ObjectId_Type_Date_Source]
on TimeSeries(ObjectId asc, [Type] asc, [Date] asc, [Source] asc)
include(Value) with (fillfactor = 80);
go
create nonclustered index [IX_TimeSeries_ObjectId_Date]
on TimeSeries(ObjectId asc, [Date] asc)
include(Value) with (fillfactor = 80);
go
create table Beacons
(
BeaconId uniqueidentifier not null default newid(),
[Description] varchar(50) not null,
LocationX float not null,
LocationY float not null,
Altitude float not null
constraint PK_Beacons primary key clustered (BeaconId)
)
go
create index IX_Beacons on Beacons (BeaconId)
go
create table SnowGauges
(
SnowGaugeId uniqueidentifier not null default newid(),
[Description] varchar(50) not null
constraint PK_SnowGauges primary key clustered (SnowGaugeId)
)
go
create index IX_SnowGauges on SnowGauges (SnowGaugeId)
go
insert into Beacons ([Description], LocationX, LocationY, Altitude)
values ('Dunkery', 51.162, -3.586, 519), ('Prestwich', 53.527, -2.279, 76)
insert into SnowGauges ([Description]) values ('Val d''Isère')
select * from Beacons
select * from SnowGauges
As you can see, I want to store any kind of time series in TimeSeries: temperature, pressure, biometric data, and so on. In every case I can identify a series by a unique identifier, a source and a type. There is no foreign key on ObjectId, because this unique identifier can reference any table.
At the end of this script I insert two beacons and one snow gauge, whose time series I want to populate. The XML file for doing this has the following format:
<?xml version="1.0" encoding="utf-8" ?>
<TimeSeries>
<TimeSeries ObjectId="186CA33E-AC1C-4220-81DE-C7CD32F40C1A" Date="09/06/2013 07:00:00" Value="9.2" Source = "Met Office" Type = "Temperature"/>
<TimeSeries ObjectId="186CA33E-AC1C-4220-81DE-C7CD32F40C1A" Date="09/06/2013 10:00:00" Value="8.8" Source = "Met Office" Type = "Temperature"/>
<TimeSeries ObjectId="186CA33E-AC1C-4220-81DE-C7CD32F40C1A" Date="09/06/2013 13:00:00" Value="8.7" Source = "Met Office" Type = "Temperature"/>
<TimeSeries ObjectId="186CA33E-AC1C-4220-81DE-C7CD32F40C1A" Date="09/06/2013 07:00:00" Value="1" Source = "Met Office" Type = "UV"/>
<TimeSeries ObjectId="186CA33E-AC1C-4220-81DE-C7CD32F40C1A" Date="09/06/2013 10:00:00" Value="3" Source = "Met Office" Type = "UV"/>
<TimeSeries ObjectId="186CA33E-AC1C-4220-81DE-C7CD32F40C1A" Date="09/06/2013 13:00:00" Value="5" Source = "Met Office" Type = "UV"/>
<TimeSeries ObjectId="AFB81E51-18B0-4696-9C2F-E6E9EEC1B647" Date="09/06/2013 07:00:00" Value="5.8" Source = "Met Office" Type = "Temperature"/>
<TimeSeries ObjectId="AFB81E51-18B0-4696-9C2F-E6E9EEC1B647" Date="09/06/2013 10:00:00" Value="6.3" Source = "Met Office" Type = "Temperature"/>
<TimeSeries ObjectId="AFB81E51-18B0-4696-9C2F-E6E9EEC1B647" Date="09/06/2013 13:00:00" Value="6.5" Source = "Met Office" Type = "Temperature"/>
<TimeSeries ObjectId="50E52A2B-D719-4341-A451-110D0874D26D" Date="07/06/2013 00:00:00" Value="80.5" Source = "Meteo France" Type = "SnowMeter"/>
<TimeSeries ObjectId="50E52A2B-D719-4341-A451-110D0874D26D" Date="08/06/2013 00:00:00" Value="80.5" Source = "Meteo France" Type = "SnowMeter"/>
</TimeSeries>
If you run the first script you will probably get different ObjectIds, and you will have to update them in the XML file. From here on everything should be straightforward: a simple C# program should do the job of inserting the data. Now let's look at the C# code:
using System;
using System.Data;
using System.Data.SqlClient;
using System.IO;
namespace XMLBulkInsert
{
class Program
{
const string XMLFILE_PATH = @"C:\Workspaces\Ws1\R\TimeSeries\TimeSeries.xml";
const string CONNECTION_STRING = @"Server=RISK1;Database=DevStat;Trusted_Connection=True;";
static void Main(string[] args)
{
StreamReader xmlFile = new StreamReader(XMLFILE_PATH);
DataSet ds = new DataSet();
Console.Write("Read file... ");
ds.ReadXml(xmlFile);
DataTable sourceData = ds.Tables[0];
Console.WriteLine("Done !");
using (SqlConnection sourceConnection = new SqlConnection(CONNECTION_STRING))
{
sourceConnection.Open();
using (SqlBulkCopy bulkCopy = new SqlBulkCopy(sourceConnection.ConnectionString))
{
bulkCopy.ColumnMappings.Add("ObjectId", "ObjectId");
bulkCopy.ColumnMappings.Add("Date", "Date");
bulkCopy.ColumnMappings.Add("Value", "Value");
bulkCopy.ColumnMappings.Add("Source", "Source");
bulkCopy.ColumnMappings.Add("Type", "Type");
bulkCopy.DestinationTableName = "TimeSeries";
try
{
Console.Write("Insert data... ");
bulkCopy.WriteToServer(sourceData);
}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
}
finally
{
xmlFile.Close();
sourceConnection.Close();
}
}
}
Console.WriteLine("Insertion completed, please Press Enter...");
Console.ReadLine();
}
}
}
Running this program returns the following exception: "The given value of type String from the data source cannot be converted to type uniqueidentifier of the specified target column." When I set up the mappings there seems to be no way to force a column to be a uniqueidentifier. I even tried adding ds.Tables[0].Columns[0].DataType = typeof(Guid);, but with no success: .NET cannot change a column's type once the table contains rows.
I had high hopes for SqlBulkCopy, but now I feel a bit stuck. I have millions of rows of data in XML format, and because of this uniqueidentifier I cannot insert any of it.
Does anyone know how to set up this class to accept uniqueidentifiers?
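For what it's worth, one direct way around the conversion error (a sketch, not from the original thread; the class name TypedCopy is mine and the column names follow the XML attributes above) is to accept that DataSet.ReadXml produces string-only columns, and copy the rows into a second, strongly-typed DataTable before calling WriteToServer:

```csharp
using System;
using System.Data;
using System.Globalization;

static class TypedCopy
{
    // Build a strongly-typed copy of the string-only table that
    // DataSet.ReadXml produces, parsing each value explicitly.
    public static DataTable ToTypedTable(DataTable source)
    {
        var typed = new DataTable("TimeSeries");
        typed.Columns.Add("ObjectId", typeof(Guid));
        typed.Columns.Add("Date", typeof(DateTime));
        typed.Columns.Add("Value", typeof(double));
        typed.Columns.Add("Source", typeof(string));
        typed.Columns.Add("Type", typeof(string));

        foreach (DataRow row in source.Rows)
        {
            typed.Rows.Add(
                Guid.Parse((string)row["ObjectId"]),
                // the sample file uses dd/MM/yyyy, so parse it explicitly
                DateTime.ParseExact((string)row["Date"], "dd/MM/yyyy HH:mm:ss",
                                    CultureInfo.InvariantCulture),
                double.Parse((string)row["Value"], CultureInfo.InvariantCulture),
                (string)row["Source"],
                (string)row["Type"]);
        }
        return typed;
    }
}
```

Handing the result of ToTypedTable(ds.Tables[0]) to bulkCopy.WriteToServer avoids the String-to-uniqueidentifier conversion, although it still holds everything in memory, which matters at the volumes mentioned above.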
Given the comment about roughly 300 million rows, I would forget about DataTable; you do not want to load all of that data at once. Ideally you would parse it element by element, exposing the data as an IDataReader.
Fortunately, utilities for this exist. First, let's parse the data. Each row is basically:
class TimeSeries
{
public Guid ObjectId { get; set; }
public DateTime Date { get; set; }
public string Source { get; set; }
public string Type { get; set; }
public decimal Value { get; set; }
}
and we can write an element-based reader such as:
static IEnumerable<TimeSeries> ReadTimeSeries(TextReader source)
{
using (var reader = XmlReader.Create(source, new XmlReaderSettings {
IgnoreWhitespace = true }))
{
reader.MoveToContent();
reader.ReadStartElement("TimeSeries");
while(reader.Read() && reader.NodeType == XmlNodeType.Element
&& reader.Depth == 1)
{
using (var subtree = reader.ReadSubtree())
{
var el = XElement.Load(subtree);
var obj = new TimeSeries
{
ObjectId = (Guid) el.Attribute("ObjectId"),
// note: datetime is not xml format; need to parse - this
// should probably be more explicit
Date = DateTime.Parse((string) el.Attribute("Date")),
Source = (string) el.Attribute("Source"),
Type = (string)el.Attribute("Type"),
Value = (decimal)el.Attribute("Value")
};
yield return obj;
}
}
}
}
Note that this is an "iterator block" and spools lazily: it never loads all the data at once.
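The comment inside the reader above is worth taking seriously: the Date attribute is not in XML dateTime format, and DateTime.Parse will interpret "09/06/2013" according to the current culture. A more explicit variant (a sketch; the helper name SeriesDates is mine, and the dd/MM/yyyy HH:mm:ss layout is assumed from the sample file):

```csharp
using System;
using System.Globalization;

static class SeriesDates
{
    // Parse with an explicit format instead of relying on the thread's
    // current culture; "09/06/2013 07:00:00" is 9 June 2013 here.
    public static DateTime Parse(string value)
    {
        return DateTime.ParseExact(value, "dd/MM/yyyy HH:mm:ss",
                                   CultureInfo.InvariantCulture);
    }
}
```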
Next, we need an API that can take an IEnumerable<T> and expose it as an IDataReader. FastMember does exactly that (among many other things), so we can write:
using(var bcp = new SqlBulkCopy(connection))
using(var objectReader = ObjectReader.Create(ReadTimeSeries(source)))
{
bcp.DestinationTableName = "SomeTable";
bcp.WriteToServer(objectReader);
}
where source is a TextReader, for example from File.OpenText:
using(var source = File.OpenText(path))
using(var bcp = new SqlBulkCopy(connection))
using(var objectReader = ObjectReader.Create(ReadTimeSeries(source)))
{
bcp.DestinationTableName = "SomeTable";
bcp.WriteToServer(objectReader);
}
If you want to control the column order you can use bcp.ColumnMappings, but perhaps it is more convenient to let the IDataReader handle it internally:
using(var objectReader = ObjectReader.Create(
    ReadTimeSeries(source), "ObjectId", "Date", "Value" /* etc */))
{
bcp.DestinationTableName = "SomeTable";
bcp.WriteToServer(objectReader);
}
I use this approach in some of my own code; even when the data does not fit in memory, it is much faster than going through a DataTable.
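For loads of this size a few SqlBulkCopy settings are also worth knowing about. A sketch of the wiring, assuming the ReadTimeSeries iterator and an open SqlConnection as above, and .NET 4.5 or later for EnableStreaming:

```csharp
using (var bcp = new SqlBulkCopy(connection))
using (var objectReader = ObjectReader.Create(ReadTimeSeries(source),
        "ObjectId", "Date", "Value", "Source", "Type"))
{
    bcp.DestinationTableName = "TimeSeries";
    bcp.EnableStreaming = true; // stream from the IDataReader instead of buffering it
    bcp.BatchSize = 10000;      // commit in batches rather than one huge transaction
    bcp.BulkCopyTimeout = 0;    // disable the 30-second default for long-running loads
    bcp.WriteToServer(objectReader);
}
```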