I'm currently scraping tweets based on certain keywords using R v. 1.0.44 and the twitteR package (newest version). Specifically, I use the following command:
my_twitter_data <- searchTwitter("#aleppo", n = 40000, lang = "en", since = '2016-12-12', until = "2016-12-13", retryOnRateLimit = 120)
In a request for 40k tweets about #aleppo (which takes quite some time to retrieve due to rate limiting), only about 5k of the results are original tweets; that is, strip_retweets(my_twitter_data, strip_manual = TRUE, strip_mt = TRUE)
returns a list of length ~5k.
My problem is that I spend a lot of my rate limit, and therefore time, on retweets that are irrelevant for my further analysis. My question is whether there is a way around this in R, so that I only spend my rate limit on original tweets.
You can add
-filter:retweets
to your query (the equivalent operator exclude:retweets also works):
my_twitter_data <- searchTwitter("#aleppo -filter:retweets", n = 40000, lang = "en", since = '2016-12-12', until = "2016-12-13", retryOnRateLimit = 120)
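For completeness, here is a minimal sketch of the full workflow, assuming the twitteR package is installed and you have already created a Twitter app (the credential strings below are placeholders, not real keys). It keeps strip_retweets as an optional safety net for manual "RT @" retweets that the search operator might miss:

library(twitteR)

# Authenticate first (placeholder credentials -- replace with your own app's keys/tokens)
setup_twitter_oauth("API_KEY", "API_SECRET", "ACCESS_TOKEN", "ACCESS_SECRET")

# Exclude retweets in the query itself so the rate limit is spent on original tweets only
my_twitter_data <- searchTwitter("#aleppo -filter:retweets",
                                 n = 40000, lang = "en",
                                 since = "2016-12-12", until = "2016-12-13",
                                 retryOnRateLimit = 120)

# Optional safety net: also drop manual retweets ("RT @...") and modified tweets ("MT @...")
originals <- strip_retweets(my_twitter_data, strip_manual = TRUE, strip_mt = TRUE)

# Convert the list of status objects to a data frame for further analysis
df <- twListToDF(originals)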