The fastest way to scrape all pages within a single website

Keywords: scraping, web pages, website | Updated: 2023-09-27 17:59:10

I have a C# application that needs to scrape as many pages as possible within a given domain, as fast as possible. I use a Parallel.ForEach that loops over all the URLs (multi-threaded) and scrapes each one with the code below:

private string ScrapeWebpage(string url, DateTime? updateDate)
{
    // Create the request (with HTTP compression support).
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
    request.Pipelined = true;
    request.KeepAlive = true;
    request.Headers.Add(HttpRequestHeader.AcceptEncoding, "gzip,deflate");
    if (updateDate.HasValue)
        request.IfModifiedSince = updateDate.Value;

    // Get the response and decompress it if necessary.
    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
    {
        Stream responseStream = response.GetResponseStream();
        if (response.ContentEncoding.ToLower().Contains("gzip"))
            responseStream = new GZipStream(responseStream, CompressionMode.Decompress);
        else if (response.ContentEncoding.ToLower().Contains("deflate"))
            responseStream = new DeflateStream(responseStream, CompressionMode.Decompress);

        // Read the HTML; disposing the reader also disposes the
        // underlying streams, so no explicit cleanup is needed.
        using (StreamReader reader = new StreamReader(responseStream, Encoding.Default))
        {
            return reader.ReadToEnd();
        }
    }
}

As you can see, I support HTTP compression and set request.KeepAlive and request.Pipelined to true. I'd like to know whether the code I'm using is the fastest way to scrape many pages within the same website, or whether there is a better way to keep a session open across multiple requests. My code creates a new request instance for every page I hit; should I try to reuse a single request instance across all pages? Is enabling pipelining and keep-alive ideal?
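For context, the calling loop might look like the sketch below. The URL list, the degree of parallelism, and the error handling are assumptions for illustration; they are not from the original post:

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Net;
using System.Threading.Tasks;

// Hypothetical driver: scrape all URLs in parallel and collect the HTML.
var results = new ConcurrentDictionary<string, string>();
var urls = new List<string>(); // populate with URLs within the target domain

Parallel.ForEach(
    urls,
    new ParallelOptions { MaxDegreeOfParallelism = 20 }, // tune for your workload
    url =>
    {
        try
        {
            results[url] = ScrapeWebpage(url, null);
        }
        catch (WebException)
        {
            // 304 Not Modified (triggered by IfModifiedSince) and transient
            // network failures surface as WebException; skip or retry here.
        }
    });
```

Capping MaxDegreeOfParallelism matters here: without a limit, Parallel.ForEach may spawn more workers than the server (or your connection pool) can usefully serve.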


It turned out that what I was missing was:

ServicePointManager.DefaultConnectionLimit = 1000000;
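By default, .NET's ServicePointManager allows a client application only 2 concurrent connections per host, so the extra Parallel.ForEach threads were simply queuing behind the first two requests. Raising the limit before the first request is issued removes that bottleneck. A minimal sketch (the specific limit values are assumptions; choose what suits your workload):

```csharp
using System;
using System.Net;

// Must run before the first HttpWebRequest is created; changing it
// later does not affect ServicePoints that already exist.
ServicePointManager.DefaultConnectionLimit = 1000000;

// Alternatively, raise the limit for a single host instead of globally:
ServicePoint sp = ServicePointManager.FindServicePoint(new Uri("http://example.com"));
sp.ConnectionLimit = 100;
```

With the connection limit raised, each parallel worker can hold its own keep-alive connection to the site instead of waiting for one of the two default connections to free up.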