Search the Community

Showing results for tag 'redditors'.

Found 7 results

  1. This is a universal Reddit scraper that can scrape Subreddits, Redditors, and comments on posts. Scrape speeds will be determined by the speed of your internet connection.

     Changelog v3.3.2

     Added
       • Source code – In Redditor.py: added a new method, GetInteractions._get_user_subreddit(), which extracts subreddit data from the UserSubreddit object into a dictionary (a sketch of this extraction pattern appears after this list).
       • Tests – In test_Redditor.py: added TestGetUserSubredditMethod().test_get_user_subreddit() to test the new method.

     Changed
       • Source code – In Redditor.py: GetInteractions._get_user_info() now calls GetInteractions._get_user_subreddit() to set the Redditor's subreddit data within the main Redditor information dictionary. In Version.py: incremented the version number.
       • README – Incremented the PRAW badge version number.
  2. Universal Reddit Scraper
     This is a universal Reddit scraper that can scrape Subreddits, Redditors, and comments on posts. Scrape speeds will be determined by the speed of your internet connection.
  3. This is a universal Reddit scraper that can scrape Subreddits, Redditors, and comments on posts. Scrape speeds will be determined by the speed of your internet connection.

     Summary
       • Introduced livestreaming tools:
         • Livestream comments or submissions submitted within Subreddits.
         • Livestream comments or submissions submitted by a Redditor.
       (See the streaming sketch after this list.)
  4. Universal Reddit Scraper
     This is a universal Reddit scraper that can scrape Subreddits, Redditors, and comments on posts. Scrape speeds will be determined by the speed of your internet connection.

     Changelog v3.2.1
       • Structured comments export has been upgraded to include comments of all levels.
         • Structured comments are now the default export format.
         • Exporting to raw format requires including the --raw flag.
       • Tons of metadata has been added to all scrapers. See the Full Changelog section for a full list of the attributes that have been added.
       • Credentials.py has been deprecated in favor of .env to avoid hard-coding API credentials (see the .env sketch after this list).
       • Added more terminal eye candy – Halo has been implemented to spice up the output.
  5. Universal Reddit Scraper
     This is a universal Reddit scraper that can scrape Subreddits, Redditors, and comments on posts. Scrape speeds will be determined by the speed of your internet connection.
  6. Universal Reddit Scraper
     This is a universal Reddit scraper that can scrape Subreddits, Redditors, and comments on posts. Scrape speeds will be determined by the speed of your internet connection.

     Changelog v3.1.2
     New in 3.1.2
       • URS will create sub-directories within the date directory based on the scraper. Exported files are now stored in the subreddits, redditors, or comments directories. These directories are only created if the corresponding scraper is run; for example, the redditors directory will not be created if you never run the Redditor scraper (see the directory-layout sketch after this list).
       • Removed the first character previously used in exported filenames to distinguish the scrape type; the new sub-directories make it unnecessary.
       • The forbidden-access message that may appear when running the Redditor scraper was originally red. Changed the color from red to yellow to avoid confusion.
       • Fixed a file-naming bug that would omit the scrape type if the filename length was greater than 50 characters.
       • Updated README: updated the demo GIFs, added a new directory-structure visual generated by the tree command, and created new section headers to improve navigation.
       • Minor code reformatting/refactoring.
       • Updated STYLE_GUIDE to reflect the new changes and made a minor change to the PRAW API walkthrough.
  7. Universal Reddit Scraper
     This is a universal Reddit scraper that can scrape Subreddits, Redditors, and comments on posts. Scrape speeds will be determined by the speed of your internet connection.

     Changelog v3.1
     New in 3.1.0:
       • Scrapes are now exported to the scrapes/ directory, within a subdirectory corresponding to the date of the scrape. These directories are created automatically when you run URS.
       • Added log decorators that record what is happening during each scrape, which scrapes were run, and any errors that arise during runtime, in the log file scrapes.log. The log is stored in the same date subdirectory (see the decorator sketch after this list).
       • Replaced bulky titles with minimalist titles for a cleaner look.
       • Added color to terminal output.
       • Improved the naming convention for scripts.
       • Integrated Travis CI and Codecov.
       • Updated the community documents located in the .github/ directory: BUG_REPORT, CONTRIBUTING, FEATURE_REQUEST, PULL_REQUEST_TEMPLATE, and STYLE_GUIDE.
       • Numerous changes to README; the most significant was splitting the walkthroughs out and storing them in docs/.
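For the v3.3.2 change described in result 1, the underlying idea is copying selected fields from a PRAW Redditor's user-subreddit object into a plain dictionary. The following is a minimal Python sketch of that pattern, not URS's actual GetInteractions._get_user_subreddit() code; the chosen field names and placeholder credentials are assumptions.

    # Hypothetical sketch – not URS's implementation; field names are assumptions.
    import praw

    def user_subreddit_to_dict(redditor):
        """Copy selected attributes of redditor.subreddit into a plain dict."""
        user_sub = redditor.subreddit  # the Redditor's user-subreddit object
        fields = ("display_name", "title", "public_description", "subscribers")
        return {field: getattr(user_sub, field, None) for field in fields}

    reddit = praw.Reddit(
        client_id="...",          # placeholder credentials
        client_secret="...",
        user_agent="example-script",
    )
    print(user_subreddit_to_dict(reddit.redditor("spez")))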
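The livestreaming tools summarized in result 3 map onto PRAW's stream generators. A minimal sketch, assuming valid API credentials; the subreddit and Redditor names are placeholders.

    import praw

    reddit = praw.Reddit(
        client_id="...",          # placeholder credentials
        client_secret="...",
        user_agent="example-script",
    )

    def stream_subreddit_comments(name):
        # Print comments as they are submitted within the Subreddit (runs until interrupted).
        for comment in reddit.subreddit(name).stream.comments(skip_existing=True):
            print(f"{comment.author}: {comment.body[:80]}")

    def stream_redditor_submissions(name):
        # Print submissions as they are submitted by the Redditor.
        for submission in reddit.redditor(name).stream.submissions(skip_existing=True):
            print(submission.title)

    stream_subreddit_comments("AskReddit")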
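Result 4 mentions replacing Credentials.py with a .env file so API credentials are not hard-coded. A minimal sketch of that pattern using python-dotenv; the variable names are assumptions, not necessarily the ones URS reads.

    # .env (kept out of version control):
    #   CLIENT_ID=...
    #   CLIENT_SECRET=...
    #   USER_AGENT=example-script

    import os
    from dotenv import load_dotenv
    import praw

    load_dotenv()  # loads the key=value pairs from .env into the environment

    reddit = praw.Reddit(
        client_id=os.getenv("CLIENT_ID"),
        client_secret=os.getenv("CLIENT_SECRET"),
        user_agent=os.getenv("USER_AGENT"),
    )
    print(reddit.read_only)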
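Result 6 describes per-scraper sub-directories inside the date directory. A minimal sketch of that layout; the base directory name, date format, and helper name are assumptions rather than URS's exact conventions.

    import os
    from datetime import date

    def make_export_dir(scraper, base="scrapes"):
        # e.g. scrapes/2024-01-31/redditors – created only when that scraper runs.
        path = os.path.join(base, date.today().isoformat(), scraper)
        os.makedirs(path, exist_ok=True)
        return path

    print(make_export_dir("subreddits"))
    print(make_export_dir("redditors"))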
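Result 7 mentions log decorators that record each scrape and any runtime errors in scrapes.log. A minimal sketch of that decorator pattern, not URS's actual implementation; the function and file names are illustrative.

    import functools
    import logging

    logging.basicConfig(
        filename="scrapes.log",   # URS stores this in the date subdirectory
        format="%(asctime)s [%(levelname)s]: %(message)s",
        level=logging.INFO,
    )

    def log_scrape(func):
        # Record when a scrape starts, finishes, or raises an error.
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            logging.info("Running %s", func.__name__)
            try:
                result = func(*args, **kwargs)
            except Exception:
                logging.exception("%s failed", func.__name__)
                raise
            logging.info("Finished %s", func.__name__)
            return result
        return wrapper

    @log_scrape
    def run_redditor_scrape(name):
        return f"scraped {name}"

    run_redditor_scrape("spez")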