UBot Underground

cujo56

Fellow UBotter
  • Content Count

    36
  • Joined

  • Last visited

  • Days Won

    1

cujo56 last won the day on February 2, 2014

cujo56 had the most liked content!

Community Reputation

2 Neutral

About cujo56

  • Rank
    Advanced Member

Profile Information

  • Gender
    Male

System Specs

  • OS
    Windows 8
  • Total Memory
    8 GB
  • Framework
    v4.0
  • License
    Standard Edition

Recent Profile Visitors

2081 profile views
  1. OK, I am back. I tried compiling the bot and running it that way; that failed even worse. So I added a number of error-handling routines at different places in the bot to try to capture the error and correct for possible failures. While I can now get up to 2000 iterations/pages, it still locks up. Am I asking too much from UBot Studio? Is it meant for bigger data collections? Don't take this wrong, I love the product and the support, but I often feel like I am a beta tester and not a user.
  2. Having some success: I scraped 1000 pages last night. The bot had been failing at about 300 loops, and I still don't know why, but I added an inner loop to wait 3 minutes every 100 iterations (roughly the sketch below). Slowing it down seems to help. I am testing it now on a larger count.
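     A minimal sketch of that throttling idea in uScript, assuming the per-page work lives in a hypothetical Scrape Page define (the counts mirror last night's run: 10 x 100 = 1000 pages, pausing 180 seconds between batches):
     loop(10) {
         loop(100) {
             comment("hypothetical define that scrapes and parses one page")
             Scrape Page()
         }
         comment("pause 3 minutes after every 100 pages to ease memory and server load")
         wait(180)
     }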
  3. itexspert - Good idea. I will have to give it some thought. The data I am scraping is generated dynamically on the page via a button click and on-page parameters, so there isn't a URL list to generate. It's doable; I will just have to reset the parameters for each new browser, along the lines of the sketch below. I will probably just get duplicate data. I will let you know.
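     If I go that route, the reset would look roughly like this sketch, assuming a parameter field and a generate button (the URL and selectors here are placeholders, not the real site):
     in new browser {
         navigate("http://example.com/search","Wait")
         comment("re-apply the on-page parameters before generating the data")
         type text(<name="param1">,"some value","Standard")
         click(<id="genbtn">,"Left Click","No")
         wait(4)
     }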
  4. This was the error from the $document text: error converting value "<html.... (the rest of the resulting $document text HTML follows). No $eval nodes. I am really just grabbing HTML, picking it apart with regular expressions, and storing the data in lists for writing out (see the sketch below). If the logic were bad, it would fail within the first few loops (I would think), not a few hundred loops into the script. It's almost like the browser is crashing or something. Could the target site be blocking me?
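     For context, the grab-and-pick-apart pattern I mean is roughly this sketch (the regex here is a placeholder, not the real pattern):
     set(#page html,$document text,"Global")
     comment("pull every match of a placeholder pattern into a list for writing out later")
     add list to list(%results,$find regular expression(#page html,"(?<=<td>)[^<]+"),"Don't Delete","Global")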
  5. Thanks for the suggestion. I gave $document text a try and got similar errors. The first one is different and says error converting value "<html.... (the rest of the resulting $document text HTML follows). So I thought I would make a loop that tested the validity of the $document text, just to make sure it was grabbing the HTML (see the sketch below). Basically I would wait 4 seconds to make sure the page loads, grab the document, wait another 2 seconds just to slow it down, test whether it was holding HTML, and if not, try again. That failed too. These errors feel like a memory leak to me... the bugs are impacting my
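     A minimal sketch of that validation loop, assuming that checking for a closing </html> tag is a good-enough test that the grab succeeded:
     wait(4)
     set(#page html,$document text,"Global")
     loop while($comparison($contains(#page html,"</html>"),"=","false")) {
         comment("HTML not complete yet - slow down and grab the document again")
         wait(2)
         set(#page html,$document text,"Global")
     }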
  6. I have tried several things to address this problem in addition to the above. First, removing plugins that weren't needed in the bot. Second, uninstalling recently added programs on my PC. I also disabled firewalls and security software. Finally, I uninstalled and reinstalled UBot Studio. I am still getting the same errors. Is anyone else having these issues when scraping over 1000 pages? (Mine will error out somewhere around 300 pages, +/-.)
  7. The additional loop didn't make a difference. Same errors.
  8. Thanks for the suggestion and great support. I thought the loop would work, but once again I get an error... http://i.imgur.com/UEfhNtL.jpg?1 Then this JSON error after hitting "Continue": http://i.imgur.com/b7qGD0Q.jpg I will also try your $exists loop on the <id="genbtn"> HTML, roughly the sketch below... Hope that makes a difference.
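     As I understand the suggestion, the $exists loop would look roughly like this sketch (assuming <id="genbtn"> is the generate button):
     loop while($comparison($exists(<id="genbtn">),"=","false")) {
         comment("wait until the generate button is actually in the DOM")
         wait(1)
     }
     click(<id="genbtn">,"Left Click","No")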
  9. Looks like the problem still exists. I am back to square one on this. Any ideas as to what is triggering these errors?
  10. Ugh... OK, I moved just the working/running parts of my bot to a whole new file (I had some other scripts/tabs in the original file that were still in development). I ran a test to loop 300 pages, and it worked without error. I will test on a higher number next. I will post back here if there is still a problem; otherwise, no news is good news. Thanks.
  11. Here is an additional error message I am getting after the JSON: Error converting value True to type 'System.Collections.Generic.List`1[System.String]'. Path '', line 1, position 4. Source: pagescraperbot -> -> set -> $scrape attribute(<class="info">,"innerhtml") -> $scrape attribute(<class="info">,"innerhtml") -> $scrape attribute
  12. Here is the define it's happening in. I had this chunk of code nested before; I thought breaking it out might help me see the problem better.
     define Get Identity {
         comment("grab the raw innerhtml of the info block, then strip whitespace with regex passes")
         set(#Identity,$scrape attribute(<class="info">,"innerhtml"),"Global")
         set(#Identity,$replace regular expression(#Identity,"\\t|\\r|\\n| ",$nothing),"Global")
         set(#Identity,$replace regular expression(#Identity,"(?<=>)\\s+(?=<)",$nothing),"Global")
         set(#Identity,$replace regular expression(#Identity,"(?<=>)\\s+(?=[a-zA-Z0-9\\:])",$nothing),"Global")
         set(#Identity,$replace regular expressio
  13. OK, I am going to jump into this thread; I am having the same problem. I have tried increasing the wait time between page reloads in my scraper, just to make sure everything was fully loaded and to account for sluggish internet connections (see the sketch below). I get about 100 pages into a scrape and then the same messages. I am working on a theory: could Google AdSense on a site be the problem? After all, scripts like these could throw off their impression metrics (something they pride themselves on). I am just not sure how to test the idea. How can we do ad blocking?
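     For reference, the wait-between-reloads idea looks roughly like this sketch; the URL is a placeholder, and the randomized wait is just my way of making the pacing less uniform:
     navigate("http://example.com/page","Wait")
     comment("give the page time to finish loading, plus a random cushion")
     wait($rand(4,8))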
  14. OK... While I had UBot Studio set to Allow in Norton 360, I think the update program was being blocked. I disabled Norton and restarted UBot Studio; it ran through the update process and now works as normal.
  15. SmartScreen is off, .NET 4.5.2. I did manage to get an error message during one attempt: "error checking files" or something like that. Still locked up.