Note: Scweet is not affiliated with Twitter/X. Use responsibly and lawfully.
For heavy-duty scraping, we recommend using Scweet on Apify, a cloud-based solution that offers:
- Zero setup: No need to install or maintain infrastructure.
- Incredible Speed: Up to 1000 tweets per minute.
- High Reliability: Managed and isolated runs for consistent performance.
- Free Usage Tier: Get started for free with a generous quota, perfect for experiments, small projects, or learning how Scweet works. Once you exceed the free quota, you'll pay only $0.30 per 1,000 tweets.
Scweet has recently encountered challenges due to major changes on X (formerly Twitter). In response, we're excited to announce the new Scweet v3 release!
- Fully asynchronous architecture for faster, smoother scraping
- No more manual Chromedriver setup: Scweet handles Chromium internally with Nodriver
- Enhanced for personal and research-level scraping
- Follower & following scraping is back! (see below)
Scweet is a Python-based scraping tool designed to fetch tweets and user data without relying on traditional Twitter APIs, which have become increasingly restricted.
With Scweet, you can:
- Scrape tweets by keywords, hashtags, mentions, accounts, or timeframes
- Get detailed user profile information
- Retrieve followers, following, and verified followers!
Scrape tweets between two dates using keywords, hashtags, mentions, or specific accounts.
Available arguments include (see the sketch after this list):
- since, until
- words
- from_account, to_account, mention_account
- hashtag, lang
- limit, display_type, resume
- filter_replies, proximity, geocode
- minlikes, minretweets, minreplies
- save_dir, custom_csv_name
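For example, a minimal sketch combining a few of these arguments, assuming a `scweet` instance configured as in the setup and quickstart sections below:
# Scrape English tweets containing "bitcoin" over a five-day window
results = scweet.scrape(
    since="2022-10-01",
    until="2022-10-06",
    words=["bitcoin"],
    lang="en",
    limit=50,
    save_dir='outputs'
)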
Fetch profile details for a list of handles. Returns a dictionary with:
- username
- verified_followers
- following
- location
- website
- join_date
- description
Arguments (example below):
- handles # List of Twitter/X handles
- login (bool) # Required for complete data
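A minimal sketch, again assuming a configured `scweet` instance:
# Fetch profile details for two handles; login=True is required for complete data
infos = scweet.get_user_information(handles=["x_born_to_die_x", "Nabila_Gl"], login=True)
print(infos)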
Scweet now supports scraping followers and followings again!
Important Note: This functionality relies on browser rendering and may trigger rate-limiting or account lockouts. Use with caution and always stay logged in during scraping.
Example Usage:
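# Assumes `scweet` is a configured Scweet instance (see the configuration section below)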
handle = "x_born_to_die_x"
# Get followers
followers = scweet.get_followers(handle=handle, login=True, stay_logged_in=True, sleep=1)
# Get following
following = scweet.get_following(handle=handle, login=True, stay_logged_in=True, sleep=1)
# Get only verified followers
verified = scweet.get_verified_followers(handle=handle, login=True, stay_logged_in=True, sleep=1)
Customize Scweet's behavior during setup:
scweet = Scweet(
proxy=None, # Dict or None
cookies=None, # Nodriver-based cookie handling
cookies_path='cookies', # Folder for saving/loading cookies
user_agent=None, # Optional custom user agent
disable_images=True, # Speeds up scraping
env_path='.env', # Path to your .env file
n_splits=-1, # Date range splitting
concurrency=5, # Number of concurrent tabs
headless=True, # Headless scraping
scroll_ratio=100 # Adjust for scroll depth/speed
)
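To give a rough idea of what n_splits and concurrency do together, the sketch below splits a date range into equal sub-ranges that could then be scraped in parallel tabs. This is a conceptual illustration only, not Scweet's internal code:
from datetime import date

def split_date_range(since, until, n_splits):
    # Illustration only: split [since, until] into n_splits consecutive sub-ranges
    start, end = date.fromisoformat(since), date.fromisoformat(until)
    step = (end - start) / n_splits
    bounds = [start + i * step for i in range(n_splits)] + [end]
    return [(str(bounds[i]), str(bounds[i + 1])) for i in range(n_splits)]

# Example: a 5-day window split into 3 sub-ranges (one per concurrent tab)
print(split_date_range("2022-10-01", "2022-10-06", 3))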
Scweet requires login for tweets, user info, and followers/following.
Set up your .env file like this:
EMAIL=your_email@example.com
EMAIL_PASSWORD=your_email_password
USERNAME=your_username
PASSWORD=your_password
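To double-check that the file at env_path resolves, you can load it manually. The snippet below uses the python-dotenv package purely as a sanity check; this is an assumption about your environment, not necessarily what Scweet uses internally:
import os
from dotenv import load_dotenv

load_dotenv('.env')  # same path passed as env_path to Scweet(...)
print(os.getenv("EMAIL"), "password set:", bool(os.getenv("PASSWORD")))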
Need a temp email? Use built-in MailTM integration:
from Scweet.utils import create_mailtm_email
email, password = create_mailtm_email()
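# The returned credentials can be placed in your .env as EMAIL and EMAIL_PASSWORD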
pip install Scweet
Requires Python 3.7+ and a Chromium-based browser.
from Scweet.scweet import Scweet
from Scweet.utils import create_mailtm_email
scweet = Scweet(proxy=None, cookies=None, cookies_path='cookies',
user_agent=None, disable_images=True, env_path='.env',
n_splits=-1, concurrency=5, headless=False, scroll_ratio=100)
# Get followers (requires login)
followers = scweet.get_followers(handle="x_born_to_die_x", login=True, stay_logged_in=True, sleep=1)
print(followers)
# Get user profile data
infos = scweet.get_user_information(handles=["x_born_to_die_x", "Nabila_Gl"], login=True)
print(infos)
# Scrape tweets
results = scweet.scrape(
since="2022-10-01",
until="2022-10-06",
words=["bitcoin", "ethereum"],
lang="en",
limit=20,
minlikes=10,
minretweets=10,
save_dir='outputs',
custom_csv_name='crypto.csv'
)
print(len(results))
| tweetId | UserScreenName | Text | Likes | Retweets | Timestamp |
|---|---|---|---|---|---|
| ... | @elonmusk | ... | 18787 | 1000 | 2022-10-05T17:44:46.000Z |
Full CSV output includes user info, tweet text, stats, embedded replies, media, and more.
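Since the run above passes save_dir='outputs' and custom_csv_name='crypto.csv', the CSV should land at outputs/crypto.csv. A quick way to inspect it, assuming pandas is installed and using column names taken from the sample table above:
import pandas as pd

df = pd.read_csv('outputs/crypto.csv')
print(df[['UserScreenName', 'Text', 'Likes', 'Retweets']].head())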
Need powerful, scalable, high-volume scraping?
Try Scweet on Apify:
- Up to 1000 tweets/minute
- Export to datasets
- Secure, isolated browser instances
- Ideal for automation & research projects
We care deeply about ethical scraping.
Please: Use Scweet for research, education, and lawful purposes only. Respect platform terms and user privacy.
- Example Script
- Issues / Bugs
- Scweet on Apify
If you find Scweet useful, consider starring the repo β
We welcome PRs, bug reports, and feature suggestions!
MIT License • © 2020–2025 Altimis