Why is the number of concurrent downloads limited?
Keywords: limit, download, concurrency | Updated: 2023-09-27 18:37:15
I am trying to write my own simple web crawler. I want to download files with specific extensions from a URL. I wrote the following code:
private void button1_Click(object sender, RoutedEventArgs e)
{
    if (bw.IsBusy) return;
    bw.DoWork += new DoWorkEventHandler(bw_DoWork);
    bw.RunWorkerAsync(new string[] { URL.Text, SavePath.Text, Filter.Text });
}
//--------------------------------------------------------------------------------------------
void bw_DoWork(object sender, DoWorkEventArgs e)
{
    try
    {
        ThreadPool.SetMaxThreads(4, 4);
        string[] strs = e.Argument as string[];
        Regex reg = new Regex(@"<a(\s*[^>]*?){0,1}\s*href\s*=\s*""([^>]*?)""\s*[^>]*>(.*?)</a>",
            RegexOptions.Compiled | RegexOptions.CultureInvariant | RegexOptions.IgnoreCase);
        int i = 0;
        string domainS = strs[0];
        string Extensions = strs[2];
        string OutDir = strs[1];
        var domain = new Uri(domainS);
        string[] Filters = Extensions.Split(new char[] { ';', ',', ' ' }, StringSplitOptions.RemoveEmptyEntries);
        string outPath = System.IO.Path.Combine(OutDir, string.Format("File_{0}.html", i));
        WebClient webClient = new WebClient();
        string str = webClient.DownloadString(domainS);
        str = str.Replace("\r\n", " ").Replace('\n', ' ');
        MatchCollection mc = reg.Matches(str);
        int NumOfThreads = mc.Count;
        Parallel.ForEach(mc.Cast<Match>(), new ParallelOptions { MaxDegreeOfParallelism = 2, },
            mat =>
            {
                string val = mat.Groups[2].Value;
                var link = new Uri(domain, val);
                foreach (string ext in Filters)
                    if (val.EndsWith("." + ext))
                    {
                        Download((object)new object[] { OutDir, link });
                        break;
                    }
            });
        throw new Exception("Finished !");
    }
    catch (System.Exception ex)
    {
        ReportException(ex);
    }
}
//--------------------------------------------------------------------------------------------
private static void Download(object o)
{
    try
    {
        object[] objs = o as object[];
        Uri link = (Uri)objs[1];
        string outPath = System.IO.Path.Combine((string)objs[0], System.IO.Path.GetFileName(link.ToString()));
        if (!File.Exists(outPath))
        {
            //WebClient webClient = new WebClient();
            //webClient.DownloadFile(link, outPath);
            DownloadFile(link.ToString(), outPath);
        }
    }
    catch (System.Exception ex)
    {
        ReportException(ex);
    }
}
//--------------------------------------------------------------------------------------------
private static bool DownloadFile(string url, string filePath)
{
    try
    {
        HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(url);
        request.UserAgent = "Web Crawler";
        request.Timeout = 40000;
        // Dispose the response and its stream; an undisposed WebResponse keeps
        // its connection checked out of the (small) per-host connection pool.
        using (WebResponse response = request.GetResponse())
        using (Stream stream = response.GetResponseStream())
        using (FileStream fs = new FileStream(filePath, FileMode.CreateNew))
        {
            const int siz = 1000;
            byte[] bytes = new byte[siz];
            for (; ; )
            {
                int count = stream.Read(bytes, 0, siz);
                if (count == 0) break;
                fs.Write(bytes, 0, count);
            }
        }
    }
    catch (System.Exception ex)
    {
        ReportException(ex);
        return false;
    }
    return true;
}
The problem is that while it works fine with 2 parallel downloads:
new ParallelOptions { MaxDegreeOfParallelism = 2, }
it does not work with a higher degree of parallelism, such as:
new ParallelOptions { MaxDegreeOfParallelism = 5, }
and I get connection timeout exceptions.
At first I thought the cause was WebClient:
//WebClient webClient = new WebClient();
//webClient.DownloadFile(link, outPath);
but when I replaced it with the DownloadFile function that uses HttpWebRequest, I still got the error.
I have tested it against many web pages and nothing changes. I have also confirmed with the Chrome extension "Download Master" that these web servers allow multiple parallel downloads. Does anyone know why I get timeout exceptions when I try to download several files in parallel?
You need to set ServicePointManager.DefaultConnectionLimit. The default number of concurrent connections to the same host is 2. See also the related SO post on using connectionManagement in web.config.
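A minimal sketch of this fix, assuming a desktop .NET Framework app like the one in the question: raise the limit once at startup, before any request is issued. The value 10 here is an arbitrary choice meant to exceed the intended MaxDegreeOfParallelism, not a recommended constant.

```csharp
using System;
using System.Net;

class ConnectionLimitDemo
{
    static void Main()
    {
        // The default of 2 connections per host is what caps parallel
        // downloads to the same server; raise it before crawling starts.
        ServicePointManager.DefaultConnectionLimit = 10;

        Console.WriteLine(ServicePointManager.DefaultConnectionLimit); // prints 10
    }
}
```

The same limit can also be set declaratively through the connectionManagement element in app.config/web.config, which avoids recompiling when the limit needs tuning.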
As far as I know, IIS limits the total number of inbound and outbound connections, but that number should be on the order of 10^3, not ~5.
Could you be testing against the same URL every time? I know many web servers limit the number of simultaneous connections from a single client. For example: are you trying to download 10 copies of http://www.google.com as your test?
If so, you may want to test with a list of different sites, for example:
- http://www.google.com
- http://www.yahoo.com
- http://www.facebook.com
- http://www.bing.com
- http://www.zombo.com
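If the per-host limit does turn out to be the bottleneck, one way to reason about it (a sketch, not the asker's code) is to group the matched links by host before downloading, so each server only ever sees a small number of simultaneous connections no matter how high the overall degree of parallelism is. The URLs below are hypothetical examples based on the list above:

```csharp
using System;
using System.Linq;

class HostGroupingDemo
{
    static void Main()
    {
        // Hypothetical link list mixing several hosts.
        string[] urls =
        {
            "http://www.google.com/a.zip",
            "http://www.google.com/b.zip",
            "http://www.yahoo.com/c.zip",
            "http://www.bing.com/d.zip",
        };

        // Grouping by Uri.Host lets you cap parallelism per server rather
        // than globally, so one slow host cannot exhaust the connection
        // budget for every other host.
        var byHost = urls.GroupBy(u => new Uri(u).Host);
        foreach (var group in byHost)
            Console.WriteLine("{0}: {1} link(s)", group.Key, group.Count());
    }
}
```

Each host group could then be fed to its own bounded Parallel.ForEach (or a SemaphoreSlim) with a small per-host degree of parallelism.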