Why am I getting a null exception on throw

Keywords: exception, throw, why | Updated: 2023-09-27 18:06:16

The code is:

catch (WebException ex)
{
    failed = true;
    wccfg.failedUrls++;
    return csFiles;
}
catch (Exception ex)
{
    failed = true;
    wccfg.failedUrls++;
    throw;
}

The throw rethrows an exception; the exception message is: NullReferenceException: Object reference not set to an instance of an object.

System.NullReferenceException was unhandled by user code
  HResult=-2147467261
  Message=Object reference not set to an instance of an object.
  Source=GatherLinks
  StackTrace:
       at GatherLinks.TimeOut.getHtmlDocumentWebClient(String url, Boolean useProxy, String proxyIp, Int32 proxyPort, String usename, String password) in d:\C-Sharp\GatherLinks\GatherLinks-2\GatherLinks\GatherLinks\TimeOut.cs:line 55
       at GatherLinks.WebCrawler.webCrawler(String mainUrl, Int32 levels) in d:\C-Sharp\GatherLinks\GatherLinks-2\GatherLinks\GatherLinks\WebCrawler.cs:line 151
       at GatherLinks.WebCrawler.webCrawler(String mainUrl, Int32 levels) in d:\C-Sharp\GatherLinks\GatherLinks-2\GatherLinks\GatherLinks\WebCrawler.cs:line 151
       at GatherLinks.WebCrawler.webCrawler(String mainUrl, Int32 levels) in d:\C-Sharp\GatherLinks\GatherLinks-2\GatherLinks\GatherLinks\WebCrawler.cs:line 151
       at GatherLinks.BackgroundWebCrawling.secondryBackGroundWorker_DoWork(Object sender, DoWorkEventArgs e) in d:\C-Sharp\GatherLinks\GatherLinks-2\GatherLinks\GatherLinks\BackgroundWebCrawling.cs:line 82
       at System.ComponentModel.BackgroundWorker.OnDoWork(DoWorkEventArgs e)
       at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument)
  InnerException: 

This is the try code inside the WebCrawler function:

public List<string> webCrawler(string mainUrl, int levels)
{
    busy.WaitOne();

    HtmlWeb hw = new HtmlWeb();
    List<string> webSites;
    List<string> csFiles = new List<string>();
    csFiles.Add("temp string to know that something is happening in level = " + levels.ToString());
    csFiles.Add("current site name in this level is : " + mainUrl);
    try
    {
        HtmlAgilityPack.HtmlDocument doc = TimeOut.getHtmlDocumentWebClient(mainUrl, false, "", 0, "", "");
        done = true;
        Object[] temp_arr = new Object[8];
        temp_arr[0] = csFiles;
        temp_arr[1] = mainUrl;
        temp_arr[2] = levels;
        temp_arr[3] = currentCrawlingSite;
        temp_arr[4] = sitesToCrawl;
        temp_arr[5] = done;
        temp_arr[6] = wccfg.failedUrls;
        temp_arr[7] = failed;
        OnProgressEvent(temp_arr);

        currentCrawlingSite.Add(mainUrl);
        webSites = getLinks(doc);
        removeDupes(webSites);
        removeDuplicates(webSites, currentCrawlingSite);
        removeDuplicates(webSites, sitesToCrawl);
        if (wccfg.removeext == true)
        {
            for (int i = 0; i < webSites.Count; i++)
            {
                webSites.Remove(removeExternals(webSites, mainUrl, wccfg.localy));
            }
        }
        if (wccfg.downloadcontent == true)
        {
            retwebcontent.retrieveImages(mainUrl);
        }
        if (levels > 0)
            sitesToCrawl.AddRange(webSites);

        if (levels == 0)
        {
            return csFiles;
        }
        else
        {
            for (int i = 0; i < webSites.Count(); i++)
            {
                if (wccfg.toCancel == true)
                {
                    return new List<string>();
                }
                string t = webSites[i];
                if ((t.StartsWith("http://") == true) || (t.StartsWith("https://") == true))
                {
                    csFiles.AddRange(webCrawler(t, levels - 1));
                }
            }
            return csFiles;
        }
    }
    catch (WebException ex)
    {
        failed = true;
        wccfg.failedUrls++;
        return csFiles;
    }
    catch (Exception ex)
    {
        failed = true;
        wccfg.failedUrls++;
        throw;
    }
}

This is how wccfg is used at the top of the class:

private System.Threading.ManualResetEvent busy;
WebcrawlerConfiguration wccfg;
List<string> currentCrawlingSite;
List<string> sitesToCrawl;
RetrieveWebContent retwebcontent;
public event EventHandler<WebCrawlerProgressEventHandler> ProgressEvent;
public bool done;
public bool failed;

public WebCrawler(WebcrawlerConfiguration webcralwercfg)
{
    failed = false;
    done = false;
    currentCrawlingSite = new List<string>();
    sitesToCrawl = new List<string>();
    busy = new System.Threading.ManualResetEvent(true);
    wccfg = webcralwercfg;
}

Why am I getting a null exception on throw

You are getting a NullReferenceException because something used inside the try block was never initialized before use.

The code then enters the catch (Exception ex) block, which increments the counter, sets failed = true, and then rethrows the NullReferenceException.
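For context, a bare `throw;` rethrows the original exception object and preserves its stack trace, which is why the trace above still points into TimeOut.cs rather than at the catch block. A minimal standalone sketch of this behavior (the class and method names are illustrative, not from the crawler code):

```csharp
using System;

class RethrowDemo
{
    static void Inner()
    {
        string s = null;
        // Dereferencing a null reference throws NullReferenceException here.
        Console.WriteLine(s.Length);
    }

    static void Middle()
    {
        try
        {
            Inner();
        }
        catch (Exception)
        {
            // "throw;" rethrows the same exception object, so the stack
            // trace still names Inner(), where the null dereference
            // actually happened. "throw ex;" would reset the trace here.
            throw;
        }
    }

    static void Main()
    {
        try
        {
            Middle();
        }
        catch (NullReferenceException ex)
        {
            // The preserved trace still mentions the original throw site.
            Console.WriteLine(ex.StackTrace.Contains("Inner"));
        }
    }
}
```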

The problem is in this function call:

TimeOut.getHtmlDocumentWebClient(mainUrl, false, "", 0, "", "")
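Since the stack trace points at TimeOut.cs line 55, the null dereference either happens inside that method, or the method swallows a download failure internally and returns null, which the caller then dereferences. The sketch below illustrates the second pattern and how a guard avoids it; FetchHtml is a hypothetical stand-in, not the actual TimeOut code:

```csharp
using System;

class NullReturnDemo
{
    // Stand-in for a fetch helper that swallows its own errors and
    // returns null on failure, a common source of a later
    // NullReferenceException in the caller.
    static string FetchHtml(string url)
    {
        try
        {
            throw new InvalidOperationException("download failed");
        }
        catch (InvalidOperationException)
        {
            return null; // the caller now holds a null reference
        }
    }

    static void Main()
    {
        string html = FetchHtml("http://example.com");
        // Guarding before use avoids dereferencing the null result.
        if (html == null)
        {
            Console.WriteLine("fetch failed, skipping url");
            return;
        }
        Console.WriteLine(html.Length);
    }
}
```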

The reason the debugger stops on the throw statement is that you caught the original exception, which hides its origin from the debugger. Set your debugging options to "Break on First-Chance Exception"; then you will see where the exception really comes from, be able to inspect your variables at that point, and so on.

During debugging, it is usually a good idea to `#if` out any catch-all exception handlers, because they swallow a lot of important error information. For what you are doing, try/finally may serve you better.
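The try/finally suggestion can be sketched as follows, a minimal example with stand-in fields for failed and wccfg.failedUrls: the bookkeeping still runs when the crawl fails, but the exception propagates to the debugger unswallowed.

```csharp
using System;

class TryFinallyDemo
{
    static bool failed;
    static int failedUrls;

    static void Crawl(bool blowUp)
    {
        bool succeeded = false;
        try
        {
            if (blowUp)
            {
                throw new Exception("boom"); // simulated crawl failure
            }
            succeeded = true;
        }
        finally
        {
            // Runs whether or not an exception was thrown; the exception
            // itself is not caught here, so the debugger breaks at the
            // original throw site instead of at a rethrow.
            if (!succeeded)
            {
                failed = true;
                failedUrls++;
            }
        }
    }

    static void Main()
    {
        try { Crawl(blowUp: true); } catch (Exception) { }
        Console.WriteLine(failed);
        Console.WriteLine(failedUrls);
    }
}
```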