<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>r | B101nfo</title>
    <link>https://llrs.dev/categories/r/</link>
      <atom:link href="https://llrs.dev/categories/r/index.xml" rel="self" type="application/rss+xml" />
    <description>r</description>
    <generator>Source Themes Academic (https://sourcethemes.com/academic/)</generator><language>en-us</language><copyright>If it is code, you can copy and reuse it (MIT); if it is text, please cite and reuse it under CC-BY. 2024.</copyright><lastBuildDate>Thu, 29 Feb 2024 19:00:00 +0200</lastBuildDate>
    
    <item>
      <title>useR madrid: rtweet</title>
      <link>https://llrs.dev/talk/user-madrid-rtweet/</link>
      <pubDate>Thu, 29 Feb 2024 19:00:00 +0200</pubDate>
      <guid>https://llrs.dev/talk/user-madrid-rtweet/</guid>
      <description>


&lt;p&gt;This presentation was in Spanish. I shared the history of my involvement with rtweet and what is happening with the package and the Twitter API.&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>New rtweet release: 2.0.0</title>
      <link>https://llrs.dev/post/2024/02/16/new-rtweet-release-2-0-0/</link>
      <pubDate>Fri, 16 Feb 2024 00:00:00 +0000</pubDate>
      <guid>https://llrs.dev/post/2024/02/16/new-rtweet-release-2-0-0/</guid>
      <description>


&lt;p&gt;This is a brief announcement of rtweet version 2.0.0.
This major version change signals the move from API v1.1 to API v2.&lt;/p&gt;
&lt;p&gt;There haven’t been many changes since 1.2.1, but this release signals that API v1.1 is deprecated.&lt;/p&gt;
&lt;p&gt;The previous release was a bit of a rush to meet the requirements of the CRAN maintainers to fix an error, and it wasn’t polished.
Some users complained that it was difficult to find out what still worked.
In this release I focused mostly on making life easier for users:&lt;/p&gt;
&lt;p&gt;There is now a document mapping the deprecated API v1.1 functions to API v2: see &lt;code&gt;help(&#34;rtweet-deprecated&#34;, &#34;rtweet&#34;)&lt;/code&gt;.
I also made it easier for rtweet to work with API v2: the httr2 1.0.0 release helped avoid some workarounds in the authentication process.&lt;/p&gt;
&lt;p&gt;I also updated the vignettes to the most recent recommendations.
I am not sure the streaming vignette is up to date (keep reading to see why I left it as is).&lt;/p&gt;
&lt;p&gt;Last, following CRAN policy, users who create rtweet data can now delete it with &lt;code&gt;client_clean()&lt;/code&gt; and &lt;code&gt;auth_clean()&lt;/code&gt;.&lt;/p&gt;
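&lt;p&gt;As a minimal sketch, assuming the data was created in the default locations (check each function’s documentation for arguments):&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;library(rtweet)
# Delete the OAuth client saved by rtweet, if any
client_clean()
# Delete the saved authentication tokens
auth_clean()&lt;/code&gt;&lt;/pre&gt;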
&lt;div id=&#34;future-releases&#34; class=&#34;section level1&#34;&gt;
&lt;h1&gt;Future releases&lt;/h1&gt;
&lt;p&gt;For the last year I &lt;a href=&#34;https://github.com/ropensci/rtweet/issues/763&#34;&gt;asked the community&lt;/a&gt; for a co-maintainer with interest in the package.
Unfortunately, the people who showed some interest didn’t commit to it in the end.&lt;/p&gt;
&lt;p&gt;At the same time I &lt;a href=&#34;https://llrs.dev/post/2023/02/16/rtweet-future/&#34;&gt;also asked&lt;/a&gt; for &lt;a href=&#34;https://www.buymeacoffee.com/llrs&#34;&gt;donations&lt;/a&gt; to support API access.
It currently costs 100€ to access most endpoints, which is needed to test and develop the package.
However, this is more than half of what I spent on groceries last month.&lt;br /&gt;
Other packages like &lt;a href=&#34;https://cran.r-project.org/package=academictwitteR&#34;&gt;academictwitteR&lt;/a&gt; are also stopping development/support.
Although not archived on CRAN, it has a note in the README:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note this repo is now ARCHIVED due to changes to the Twitter API. The paid API means open-source development of this package is no longer feasible.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Similarly, without financial help and community interest, I won’t invest more time in it.&lt;br /&gt;
This is the last version that I will release.
I have other interests and would like to focus on other projects.
My plan is to update and release some other packages I have.
I also want to focus more on my own company to help the R community (and beyond).
I will write about the company shortly.&lt;/p&gt;
&lt;p&gt;There have been some discussions on social media about how to signal the deprecation of packages.
The only method available on CRAN that I know of is to declare a package ORPHANED.
I have asked CRAN to declare the package ORPHANED.&lt;/p&gt;
&lt;div id=&#34;reproducibility&#34; class=&#34;section level3&#34;&gt;
&lt;h3&gt;Reproducibility&lt;/h3&gt;
&lt;details&gt;
&lt;pre&gt;&lt;code&gt;## ─ Session info ───────────────────────────────────────────────────────────────────────────────────────────────────────
##  setting  value
##  version  R version 4.3.1 (2023-06-16)
##  os       Ubuntu 22.04.4 LTS
##  system   x86_64, linux-gnu
##  ui       X11
##  language en
##  collate  en_US.UTF-8
##  ctype    en_US.UTF-8
##  tz       Europe/Madrid
##  date     2024-02-24
##  pandoc   3.1.1 @ /usr/lib/rstudio/resources/app/bin/quarto/bin/tools/ (via rmarkdown)
## 
## ─ Packages ───────────────────────────────────────────────────────────────────────────────────────────────────────────
##  package     * version date (UTC) lib source
##  blogdown      1.18    2023-06-19 [1] CRAN (R 4.3.1)
##  bookdown      0.37    2023-12-01 [1] CRAN (R 4.3.1)
##  bslib         0.6.1   2023-11-28 [1] CRAN (R 4.3.1)
##  cachem        1.0.8   2023-05-01 [1] CRAN (R 4.3.1)
##  cli           3.6.2   2023-12-11 [1] CRAN (R 4.3.1)
##  digest        0.6.34  2024-01-11 [1] CRAN (R 4.3.1)
##  evaluate      0.23    2023-11-01 [1] CRAN (R 4.3.2)
##  fastmap       1.1.1   2023-02-24 [1] CRAN (R 4.3.1)
##  htmltools     0.5.7   2023-11-03 [1] CRAN (R 4.3.2)
##  jquerylib     0.1.4   2021-04-26 [1] CRAN (R 4.3.1)
##  jsonlite      1.8.8   2023-12-04 [1] CRAN (R 4.3.1)
##  knitr         1.45    2023-10-30 [1] CRAN (R 4.3.2)
##  lifecycle     1.0.4   2023-11-07 [1] CRAN (R 4.3.2)
##  R6            2.5.1   2021-08-19 [1] CRAN (R 4.3.1)
##  rlang         1.1.3   2024-01-10 [1] CRAN (R 4.3.1)
##  rmarkdown     2.25    2023-09-18 [1] CRAN (R 4.3.1)
##  rstudioapi    0.15.0  2023-07-07 [1] CRAN (R 4.3.1)
##  sass          0.4.8   2023-12-06 [1] CRAN (R 4.3.1)
##  sessioninfo   1.2.2   2021-12-06 [1] CRAN (R 4.3.1)
##  xfun          0.42    2024-02-08 [1] CRAN (R 4.3.1)
##  yaml          2.3.8   2023-12-11 [1] CRAN (R 4.3.1)
## 
##  [1] /home/lluis/bin/R/4.3.1
##  [2] /opt/R/4.3.1/lib/R/library
## 
## ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────&lt;/code&gt;&lt;/pre&gt;
&lt;/details&gt;
&lt;/div&gt;
&lt;/div&gt;
</description>
    </item>
    
    <item>
      <title>Releasing rtweet 1.2.0</title>
      <link>https://llrs.dev/post/2023/03/20/rtweet-starts-using-api-v2/</link>
      <pubDate>Mon, 20 Mar 2023 00:00:00 +0000</pubDate>
      <guid>https://llrs.dev/post/2023/03/20/rtweet-starts-using-api-v2/</guid>
      <description>


&lt;p&gt;I’m very excited to announce that rtweet 1.2.0 is now available on GitHub! Install it by running:&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;devtools::install_github(&amp;quot;ropensci/rtweet&amp;quot;)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then load it in a fresh session with:&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;library(rtweet)&lt;/code&gt;&lt;/pre&gt;
&lt;div id=&#34;new-features&#34; class=&#34;section level1&#34;&gt;
&lt;h1&gt;New features&lt;/h1&gt;
&lt;p&gt;This version adds many new endpoints to retrieve data from Twitter:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;From lists&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;From tweets&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;About users&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Also, about statistics of your own content.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You can read about them in the &lt;a href=&#34;https://docs.ropensci.org/rtweet/news/index.html&#34;&gt;NEWS&lt;/a&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div id=&#34;authentication&#34; class=&#34;section level1&#34;&gt;
&lt;h1&gt;Authentication&lt;/h1&gt;
&lt;p&gt;Besides fixing a problem preventing new users from using &lt;code&gt;auth_setup_default()&lt;/code&gt;, this release includes a new authentication mechanism.&lt;/p&gt;
&lt;p&gt;Some endpoints require an authentication method not previously used by rtweet.
This authentication mechanism requires setting up a client.&lt;br /&gt;
To support it, I have added some functions to create, save, and use it, modelled on the &lt;code&gt;auth_*&lt;/code&gt; functions.
There is now one client provided by rtweet if you don’t want to configure your own:&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;client_setup_default()&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Additionally, I briefly expanded the authentication vignette (&lt;code&gt;vignette(&#34;auth&#34;, &#34;rtweet&#34;)&lt;/code&gt;) to include a section about how to obtain the required credentials.
Once you get them, it is pretty straightforward:&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;auth_oauth2 &amp;lt;- rtweet_oauth2(app = &amp;quot;my_awesome_app&amp;quot;)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This mechanism is required by some functions which are of special interest: &lt;code&gt;user_self()&lt;/code&gt;, &lt;code&gt;tweet_bookmarked()&lt;/code&gt;, &lt;code&gt;user_blocked()&lt;/code&gt;, and &lt;code&gt;user_timeline()&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Note that due to upstream reasons, the authentication is only valid for 2 hours.
You will be asked to approve the client again after the 2 hours (and to save it again!).&lt;/p&gt;
&lt;p&gt;We can set the authentication as we usually do:&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;auth_as(auth_oauth2)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And start retrieving our data from Twitter!&lt;/p&gt;
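&lt;p&gt;Since the token only lasts 2 hours, it helps to save it under a name so it can be reused while it is valid. A sketch (the name &#34;my_oauth2&#34; is just an example):&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;# Save the OAuth2 token under a name
auth_save(auth_oauth2, &amp;quot;my_oauth2&amp;quot;)
# Later, load it again by name
auth_as(&amp;quot;my_oauth2&amp;quot;)&lt;/code&gt;&lt;/pre&gt;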
&lt;/div&gt;
&lt;div id=&#34;new-functions&#34; class=&#34;section level1&#34;&gt;
&lt;h1&gt;New functions&lt;/h1&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;me &amp;lt;- user_self()
bookmarked &amp;lt;- user_bookmarks(me$id, n = 120)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;rtweet will make as many requests as needed and automatically paginate the results.
However, if you try this you might notice that the queries are slow.
This is due to the rate limits imposed by Twitter.&lt;/p&gt;
&lt;p&gt;If you want to keep track of the progress of your query, you can use &lt;code&gt;verbose = TRUE&lt;/code&gt;:&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;blocked &amp;lt;- user_blocked(me$id, n = Inf, verbose = TRUE)
timeline &amp;lt;- user_timeline(me$id, n = 800, verbose = TRUE)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It will also store the data of the requests in a temporary file, so if you lose the connection you can still recover it.&lt;/p&gt;
&lt;p&gt;Some endpoints have a length limit on the accepted input:&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;bioconductor &amp;lt;- user_by_username(&amp;quot;Bioconductor&amp;quot;)
bioconductor_followers &amp;lt;- user_followers(bioconductor$id, n = 200)
us &amp;lt;- user_search(ids = bioconductor_followers$id)&lt;/code&gt;&lt;/pre&gt;
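&lt;p&gt;If you have more ids than a single request accepts, they can be split into chunks first. A sketch with base R (the limit of 100 ids per request is an assumption; check the endpoint documentation):&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;ids &amp;lt;- bioconductor_followers$id
# Split the ids into chunks of at most 100 elements (assumed limit)
chunks &amp;lt;- split(ids, ceiling(seq_along(ids) / 100))
lengths(chunks)&lt;/code&gt;&lt;/pre&gt;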
&lt;p&gt;Errors are, in principle, easier to understand in these new functions, thanks to the messages provided:&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;user_blocked(bioconductor$id, n = Inf, verbose = TRUE)&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;div id=&#34;other&#34; class=&#34;section level1&#34;&gt;
&lt;h1&gt;Other&lt;/h1&gt;
&lt;p&gt;These new endpoints provide access to a lot of data, of which only the default information is converted to a nice table.
If you request more data via expansions and fields (replies, information about the author of a tweet, …), you will have to wait for the next release to get it parsed.&lt;br /&gt;
You can already retrieve it with &lt;code&gt;parse = FALSE&lt;/code&gt;.
My intention was to provide more parsing support in this release, but I think it is better to make smaller releases more often.&lt;/p&gt;
&lt;p&gt;The API also provides an endpoint to check whether stored data is compliant with the Terms of Service.
I started working on these endpoints after the streaming ones because they are important.&lt;/p&gt;
&lt;/div&gt;
&lt;div id=&#34;side-story&#34; class=&#34;section level1&#34;&gt;
&lt;h1&gt;Side story&lt;/h1&gt;
&lt;p&gt;With the deprecation of the streaming endpoints and the function &lt;code&gt;stream_tweets&lt;/code&gt;, I implemented the first three functions using Twitter’s API v2.
They use a bearer token as the authentication mechanism.&lt;/p&gt;
&lt;p&gt;Many endpoints of API v2 also use this authentication mechanism, which made it easy to support them.&lt;/p&gt;
&lt;p&gt;But there was a request to retrieve bookmarked tweets.
That endpoint required the OAuth2 mechanism and didn’t allow the use of the bearer token.
This is relevant because bookmarks are not included in the data dump you can request from Twitter.
This endpoint is the only automated way to retrieve them if you used bookmarks!&lt;/p&gt;
&lt;/div&gt;
&lt;div id=&#34;final&#34; class=&#34;section level1&#34;&gt;
&lt;h1&gt;Final&lt;/h1&gt;
&lt;p&gt;This is the last update of rtweet.
The &lt;a href=&#34;https://twitter.com/TwitterDev/status/1641222782594990080&#34;&gt;new API plans&lt;/a&gt; make it impossible to continue developing and testing software like rtweet without substantial financial investment (at least USD$100/month).&lt;/p&gt;
&lt;p&gt;More importantly, this will restrict who can use the package.
I think the few users still using rtweet might also be able to pay for support or the development of new features.
If you are one of them, you can &lt;a href=&#34;https://www.buymeacoffee.com/llrs&#34;&gt;sponsor my work&lt;/a&gt; on rtweet.&lt;/p&gt;
&lt;p&gt;I will remove the package from CRAN one month after the new API comes into effect (~ 1st July).
Farewell, Twitter.&lt;/p&gt;
&lt;/div&gt;
</description>
    </item>
    
    <item>
      <title>Accessing REDCap from R</title>
      <link>https://llrs.dev/post/2023/02/08/accessing-redcap-from-r/</link>
      <pubDate>Wed, 08 Feb 2023 00:00:00 +0000</pubDate>
      <guid>https://llrs.dev/post/2023/02/08/accessing-redcap-from-r/</guid>
      <description>


&lt;p&gt;In this post, I want to summarize some of the packages to connect to &lt;a href=&#34;https://www.project-redcap.org/&#34;&gt;REDCap&lt;/a&gt;.
For those who don’t know, REDCap is a database designed for clinical usage, which allows clinicians to easily collect patients’ responses and to interact with patients via surveys.&lt;/p&gt;
&lt;p&gt;It has specific features such as scheduling surveys sent to patients, compatibility with tablets and mobile phones for data entry while visiting patients, grouping data in instruments (for repeating the same questions multiple times), multiple-choice and check buttons, and different arms (like paths for patients).
Most importantly, it is relatively easy to manage by clinical administrators.&lt;/p&gt;
&lt;p&gt;On CRAN there are ~11 &lt;a href=&#34;https://search.r-project.org/?P=REDCap&amp;amp;SORT=&amp;amp;HITSPERPAGE=10&amp;amp;DB=cran-info&amp;amp;DEFAULTOP=and&amp;amp;FMT=query&amp;amp;xDB=all&amp;amp;xFILTERS=.%7E%7E&#34;&gt;packages mentioning it&lt;/a&gt; at the time of writing.
The purpose of this post is to help decide which packages can be helpful in which situations.
This post won’t be a deep analysis or comparison of capabilities; it describes some of the best and worst features of each package.&lt;/p&gt;
&lt;div id=&#34;redcapr&#34; class=&#34;section level2&#34;&gt;
&lt;h2&gt;REDCapR&lt;/h2&gt;
&lt;p&gt;&lt;a href=&#34;https://cran.r-project.org/package=REDCapR&#34;&gt;REDCapR&lt;/a&gt; is the official package to connect to the database.
It allows you to read, write and filter the requests.
It has some security-related functions.&lt;/p&gt;
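&lt;p&gt;A minimal sketch of reading records with REDCapR (the instance URI and the token are placeholders for your own credentials):&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;library(REDCapR)
# Hypothetical REDCap instance and API token
ds &amp;lt;- redcap_read(
  redcap_uri = &amp;quot;https://redcap.example.org/api/&amp;quot;,
  token = &amp;quot;YOUR_API_TOKEN&amp;quot;
)
head(ds$data)&lt;/code&gt;&lt;/pre&gt;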
&lt;/div&gt;
&lt;div id=&#34;redcaptidier&#34; class=&#34;section level2&#34;&gt;
&lt;h2&gt;REDCapTidieR&lt;/h2&gt;
&lt;p&gt;&lt;a href=&#34;https://cran.r-project.org/package=REDCapTidieR&#34;&gt;REDCapTidieR&lt;/a&gt; is a package that provides summaries of tables and helps with nested tibbles data by arm.
It depends on REDCapR.&lt;/p&gt;
&lt;/div&gt;
&lt;div id=&#34;tidyredcap&#34; class=&#34;section level2&#34;&gt;
&lt;h2&gt;tidyREDCap&lt;/h2&gt;
&lt;p&gt;&lt;a href=&#34;https://cran.r-project.org/package=tidyREDCap&#34;&gt;tidyREDCap&lt;/a&gt; is a package that simplifies the tables for instruments and for choose-all or choose-one question types.
It makes it easy to build tables, and it depends on REDCapR.
It requires the first and last columns to make instruments.&lt;/p&gt;
&lt;div class=&#34;figure&#34;&gt;
&lt;img src=&#34;images/redcap_design.jpg&#34; alt=&#34;&#34; /&gt;
&lt;p class=&#34;caption&#34;&gt;Screenshot of a design with several instruments in a single arm (from &lt;a href=&#34;https://www.project-redcap.org/&#34; class=&#34;uri&#34;&gt;https://www.project-redcap.org/&lt;/a&gt;)&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div id=&#34;redcapexporter&#34; class=&#34;section level2&#34;&gt;
&lt;h2&gt;REDCapExporter&lt;/h2&gt;
&lt;p&gt;&lt;a href=&#34;https://cran.r-project.org/package=REDCapExporter&#34;&gt;REDCapExporter&lt;/a&gt; is a package to build a data package from a database for redistribution.
It does not depend on REDCapR.&lt;/p&gt;
&lt;/div&gt;
&lt;div id=&#34;redcapapi&#34; class=&#34;section level2&#34;&gt;
&lt;h2&gt;redcapAPI&lt;/h2&gt;
&lt;p&gt;&lt;a href=&#34;https://cran.r-project.org/package=redcapAPI&#34;&gt;redcapAPI&lt;/a&gt; is a package for making data accessible and analysis-ready as quickly as possible. It has extensive documentation in a wiki, but no vignettes or examples, and it does not depend on REDCapR.&lt;/p&gt;
&lt;/div&gt;
&lt;div id=&#34;redcapdm&#34; class=&#34;section level2&#34;&gt;
&lt;h2&gt;REDCapDM&lt;/h2&gt;
&lt;p&gt;&lt;a href=&#34;https://cran.r-project.org/package=REDCapDM&#34;&gt;REDCapDM&lt;/a&gt; is a package that provides functions to read and manage REDCap data and identify missing or extreme values as well as transform the data provided by the API.
It depends on REDCapR.&lt;/p&gt;
&lt;/div&gt;
&lt;div id=&#34;reviewr&#34; class=&#34;section level2&#34;&gt;
&lt;h2&gt;ReviewR&lt;/h2&gt;
&lt;p&gt;&lt;a href=&#34;https://cran.r-project.org/package=ReviewR&#34;&gt;ReviewR&lt;/a&gt; is a package that creates a Shiny website with data from the database to explore it.
It uses REDCapR to connect to your instance.&lt;/p&gt;
&lt;/div&gt;
&lt;div id=&#34;rccola&#34; class=&#34;section level2&#34;&gt;
&lt;h2&gt;rccola&lt;/h2&gt;
&lt;p&gt;&lt;a href=&#34;https://cran.r-project.org/package=rccola&#34;&gt;rccola&lt;/a&gt; is a package to provide a secure connection to the database but it doesn’t provide any handling of the data.
It uses redcapAPI to connect to the database.&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://llrs.dev/post/2023/02/08/accessing-redcap-from-r/index.en_files/figure-html/unnamed-chunk-1-1.png&#34; alt=&#34;Barplot with the dependencies: from less to more: REDCapExporter, rccola, redcapAPI, REDCapR, tidyREDCap, REDCapDM, REDCapTidieR, ReviewR&#34; width=&#34;672&#34; /&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div id=&#34;other-packages&#34; class=&#34;section level2&#34;&gt;
&lt;h2&gt;Other packages&lt;/h2&gt;
&lt;p&gt;Other packages mention REDCap:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://cran.r-project.org/package=nmadb&#34;&gt;nmadb&lt;/a&gt;: implements its own connection procedure for a specific REDCap database of network meta-analyses.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://cran.r-project.org/package=distcomp&#34;&gt;distcomp&lt;/a&gt;: allows computations on distributed data, including data in REDCap.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://cran.r-project.org/package=cgmanalysis&#34;&gt;cgmanalysis&lt;/a&gt;: mentions that the data it produces is compatible with REDCap.&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div id=&#34;conclusion&#34; class=&#34;section level1&#34;&gt;
&lt;h1&gt;Conclusion&lt;/h1&gt;
&lt;p&gt;I’m sure that many packages briefly described here can do much more than what I understood from a glance at their documentation and DESCRIPTION.&lt;/p&gt;
&lt;p&gt;Most packages provide some data for their examples (and probably tests), while others do not.
This is a technical detail that might impact users when functions lack examples.&lt;/p&gt;
&lt;p&gt;REDCapR is used by most packages to access the database, but most of the packages focus on transforming the data provided by the API or the exported data.
This highlights that the exported data is useful, but that, depending on the preferences of the users, it needs to be transformed for easy usage.&lt;/p&gt;
&lt;/div&gt;
</description>
    </item>
    
    <item>
      <title>Los paquetes van a CRAN</title>
      <link>https://llrs.dev/talk/los-paquetes-van-a-cran/</link>
      <pubDate>Fri, 25 Nov 2022 00:00:00 +0000</pubDate>
      <guid>https://llrs.dev/talk/los-paquetes-van-a-cran/</guid>
      <description>


&lt;p&gt;The “XII Jornadas de R y I congreso de R Hispano” conference is the meeting point of useRs in Spain, with many people from around the country and with different backgrounds: mathematicians, engineers, forest engineers.
All the talks were in Spanish, although some of the contents were in English.
I submitted this abstract in Spanish (translated here to English):&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;In general, sharing our work with the R community means submitting a package to
an archive (CRAN, Bioconductor or others). CRAN is the largest repository of R packages and is accepted
by default by R. But what does it take to write a package, have CRAN accept it, and keep it
on CRAN?&lt;br /&gt;
Whether a package stays on CRAN depends on its quality, because it has to
pass a review process. If the package follows the rules and its quality matches
the criteria, it will last. First, there is an automated initial check; second, a deeper
manual review of the code. Then, if the suggestions are applied or answered correctly, the
package is included in the archive.&lt;br /&gt;
At each step, rules and criteria are used to decide whether the package advances or not. Understanding
what these rules say, the common problems, and the reviewers’ comments helps avoid
submitting a package only for it to be rejected; it reduces the friction between sharing our work,
providing useful packages to the community, and minimizing the reviewers’ time and effort.&lt;br /&gt;
From the historical data we will look at the usual process, the waiting time until inclusion,
the typical number of reviews before acceptance, and the success rate. We will also take a historical
tour of CRAN packages: how long a version lasts on CRAN, the relationship
between versions and dependencies, and the usual number of new packages. All to see which
characteristics our package has to meet to be included, so that other users can use
our code with quality guarantees.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;It was accepted as a flash presentation of just 5 minutes in a parallel session focused on programming and teaching R.
The room was full and people showed their interest before and after the talk, asking specifically how easy it would be to keep a package on CRAN or Bioconductor.
If you have other questions, let me know!&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Exploring CRAN&#39;s files: part 2</title>
      <link>https://llrs.dev/post/2022/07/28/cran-files-2/</link>
      <pubDate>Thu, 28 Jul 2022 00:00:00 +0000</pubDate>
      <guid>https://llrs.dev/post/2022/07/28/cran-files-2/</guid>
      <description>


&lt;div id=&#34;introduction&#34; class=&#34;section level2&#34;&gt;
&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;In the &lt;a href=&#34;https://llrs.dev/post/2022/07/23/cran-files-1/&#34;&gt;first post&lt;/a&gt; of the series we briefly explored the packages available on CRAN.
Now I’ll focus on the history of the packages and their size, using the following files:&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;packages &amp;lt;- tools::CRAN_package_db()
current &amp;lt;- tools:::CRAN_current_db()
archive &amp;lt;- tools:::CRAN_archive_db()&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In this part we will use two of these files, &lt;code&gt;current&lt;/code&gt; and &lt;code&gt;archive&lt;/code&gt;; let’s see why.&lt;/p&gt;
&lt;div id=&#34;current-file&#34; class=&#34;section level3&#34;&gt;
&lt;h3&gt;current file&lt;/h3&gt;
&lt;p&gt;The current database has the package size, the dates of modification (which I assume include the date of addition to CRAN), and the user name of whoever last modified it.
This is the same information returned by &lt;a href=&#34;https://search.r-project.org/R/refmans/base/html/file.info.html&#34;&gt;&lt;code&gt;file.info&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;current[1, 1:10]
##     size isdir mode               mtime               ctime               atime
## A3 42810 FALSE  664 2015-08-16 23:05:54 2022-09-03 12:02:27 2022-09-03 14:00:19
##     uid  gid  uname    grname
## A3 1001 1001 hornik cranadmin&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;div id=&#34;archive-file&#34; class=&#34;section level3&#34;&gt;
&lt;h3&gt;archive file&lt;/h3&gt;
&lt;p&gt;The archive database returns the same information, but as you might guess from the name, it doesn’t provide information about current packages but about packages in the archive that are no longer available by default.&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;archive[[1]]
##                     size isdir mode               mtime               ctime
## A3/A3_0.9.1.tar.gz 45252 FALSE  664 2013-02-07 10:00:29 2022-08-22 18:14:53
## A3/A3_0.9.2.tar.gz 45907 FALSE  664 2013-03-26 19:58:40 2022-08-22 18:14:53
##                                  atime  uid  gid  uname    grname
## A3/A3_0.9.1.tar.gz 2022-08-22 17:39:50 1001 1001 hornik cranadmin
## A3/A3_0.9.2.tar.gz 2022-08-22 17:39:50 1010 1001 ligges cranadmin&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The date matches the one available on the &lt;a href=&#34;https://cran.r-project.org/src/contrib/Archive/A3/&#34;&gt;web’s old sources&lt;/a&gt;, so we can be confident of its meaning.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div id=&#34;cran-history&#34; class=&#34;section level2&#34;&gt;
&lt;h2&gt;CRAN history&lt;/h2&gt;
&lt;p&gt;As we have seen, there are some files about the CRAN archives.
These include the date of modification (moving/editing), the user who did it, and of course the name and sometimes the version of the package.
These archives are the great treasure of CRAN, because they help make experiments or analyses run long ago reproducible.&lt;/p&gt;
&lt;p&gt;Note that I’m not totally sure that this archive contains the full record of packages; some early packages might be missing.
I’m also aware of some packages removed by CRAN which no longer appear in these records.&lt;/p&gt;
&lt;p&gt;Nevertheless, this should provide an accurate picture of the packages available through time.
Also, as there is no information on when a package is archived (here; &lt;a href=&#34;https://llrs.dev/post/2021/12/07/reasons-cran-archivals/&#34;&gt;there is in PACKAGES.in&lt;/a&gt;), I might overestimate the packages available at any given moment.&lt;/p&gt;
&lt;p&gt;Remember the plot about the &lt;a href=&#34;#accepted&#34;&gt;acceptance of packages on CRAN&lt;/a&gt;?
That plot only looked at the packages currently available; let’s check it with the full archive:&lt;/p&gt;
&lt;div class=&#34;figure&#34;&gt;&lt;span style=&#34;display:block;&#34; id=&#34;fig:accumulative-packages&#34;&gt;&lt;/span&gt;
&lt;img src=&#34;https://llrs.dev/post/2022/07/28/cran-files-2/index.en_files/figure-html/accumulative-packages-1.png&#34; alt=&#34;*Packages on CRAN archive by their addition to it.* There are over 125000 archives on CRAN.&#34; width=&#34;672&#34; /&gt;
&lt;p class=&#34;caption&#34;&gt;
Figure 1: &lt;em&gt;Packages on CRAN archive by their addition to it.&lt;/em&gt; There are over 125000 archives on CRAN.
&lt;/p&gt;
&lt;/div&gt;
&lt;p&gt;These archives come both from packages with few releases and from packages with many releases.
Let’s look at which packages had the most releases:&lt;/p&gt;
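&lt;p&gt;A sketch of how such a count could be computed from the &lt;code&gt;archive&lt;/code&gt; list loaded above (each element is a data frame with one row per archived tarball):&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;# Count archived tarballs per package; the list is named by package
releases &amp;lt;- vapply(archive, nrow, integer(1))
head(sort(releases, decreasing = TRUE))&lt;/code&gt;&lt;/pre&gt;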
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;&lt;th&gt;package&lt;/th&gt;&lt;th&gt;Releases&lt;/th&gt;&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;spatstat&lt;/td&gt;&lt;td&gt;206&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Matrix&lt;/td&gt;&lt;td&gt;204&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;mgcv&lt;/td&gt;&lt;td&gt;162&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;RcppArmadillo&lt;/td&gt;&lt;td&gt;150&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;thead&gt;&lt;tr style=&#34;overflow-wrap:break-word;&#34;&gt;&lt;td class=&#34;cl-e2fc7aab&#34;&gt;&lt;p class=&#34;cl-e2fc2fdc&#34;&gt;&lt;span class=&#34;cl-e2fc13c6&#34;&gt;package&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;td class=&#34;cl-e2fc7ab4&#34;&gt;&lt;p class=&#34;cl-e2fc2fe6&#34;&gt;&lt;span class=&#34;cl-e2fc13c6&#34;&gt;Releases&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr style=&#34;overflow-wrap:break-word;&#34;&gt;&lt;td class=&#34;cl-e2fc7a5a&#34;&gt;&lt;p class=&#34;cl-e2fc2fdc&#34;&gt;&lt;span class=&#34;cl-e2fc13c6&#34;&gt;spatstat&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;td class=&#34;cl-e2fc7a46&#34;&gt;&lt;p class=&#34;cl-e2fc2fe6&#34;&gt;&lt;span class=&#34;cl-e2fc13c6&#34;&gt;206&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr style=&#34;overflow-wrap:break-word;&#34;&gt;&lt;td class=&#34;cl-e2fc7a97&#34;&gt;&lt;p class=&#34;cl-e2fc2fdc&#34;&gt;&lt;span class=&#34;cl-e2fc13c6&#34;&gt;Matrix&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;td class=&#34;cl-e2fc7aa0&#34;&gt;&lt;p class=&#34;cl-e2fc2fe6&#34;&gt;&lt;span class=&#34;cl-e2fc13c6&#34;&gt;204&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr style=&#34;overflow-wrap:break-word;&#34;&gt;&lt;td class=&#34;cl-e2fc7a6f&#34;&gt;&lt;p class=&#34;cl-e2fc2fdc&#34;&gt;&lt;span class=&#34;cl-e2fc13c6&#34;&gt;mgcv&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;td class=&#34;cl-e2fc7a82&#34;&gt;&lt;p class=&#34;cl-e2fc2fe6&#34;&gt;&lt;span class=&#34;cl-e2fc13c6&#34;&gt;162&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr style=&#34;overflow-wrap:break-word;&#34;&gt;&lt;td class=&#34;cl-e2fc7a64&#34;&gt;&lt;p class=&#34;cl-e2fc2fdc&#34;&gt;&lt;span class=&#34;cl-e2fc13c6&#34;&gt;RcppArmadillo&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;td class=&#34;cl-e2fc7a6e&#34;&gt;&lt;p class=&#34;cl-e2fc2fe6&#34;&gt;&lt;span class=&#34;cl-e2fc13c6&#34;&gt;150&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr style=&#34;overflow-wrap:break-word;&#34;&gt;&lt;td class=&#34;cl-e2fc7a64&#34;&gt;&lt;p 
class=&#34;cl-e2fc2fdc&#34;&gt;&lt;span class=&#34;cl-e2fc13c6&#34;&gt;rgdal&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;td class=&#34;cl-e2fc7a6e&#34;&gt;&lt;p class=&#34;cl-e2fc2fe6&#34;&gt;&lt;span class=&#34;cl-e2fc13c6&#34;&gt;146&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr style=&#34;overflow-wrap:break-word;&#34;&gt;&lt;td class=&#34;cl-e2fc7a97&#34;&gt;&lt;p class=&#34;cl-e2fc2fdc&#34;&gt;&lt;span class=&#34;cl-e2fc13c6&#34;&gt;nlme&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;td class=&#34;cl-e2fc7aa0&#34;&gt;&lt;p class=&#34;cl-e2fc2fe6&#34;&gt;&lt;span class=&#34;cl-e2fc13c6&#34;&gt;143&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr style=&#34;overflow-wrap:break-word;&#34;&gt;&lt;td class=&#34;cl-e2fc7a8c&#34;&gt;&lt;p class=&#34;cl-e2fc2fdc&#34;&gt;&lt;span class=&#34;cl-e2fc13c6&#34;&gt;caret&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;td class=&#34;cl-e2fc7a96&#34;&gt;&lt;p class=&#34;cl-e2fc2fe6&#34;&gt;&lt;span class=&#34;cl-e2fc13c6&#34;&gt;139&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr style=&#34;overflow-wrap:break-word;&#34;&gt;&lt;td class=&#34;cl-e2fc7a64&#34;&gt;&lt;p class=&#34;cl-e2fc2fdc&#34;&gt;&lt;span class=&#34;cl-e2fc13c6&#34;&gt;spdep&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;td class=&#34;cl-e2fc7a6e&#34;&gt;&lt;p class=&#34;cl-e2fc2fe6&#34;&gt;&lt;span class=&#34;cl-e2fc13c6&#34;&gt;139&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr style=&#34;overflow-wrap:break-word;&#34;&gt;&lt;td class=&#34;cl-e2fc7a97&#34;&gt;&lt;p class=&#34;cl-e2fc2fdc&#34;&gt;&lt;span class=&#34;cl-e2fc13c6&#34;&gt;lattice&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;td class=&#34;cl-e2fc7aa0&#34;&gt;&lt;p class=&#34;cl-e2fc2fe6&#34;&gt;&lt;span class=&#34;cl-e2fc13c6&#34;&gt;137&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr style=&#34;overflow-wrap:break-word;&#34;&gt;&lt;td class=&#34;cl-e2fc7a64&#34;&gt;&lt;p class=&#34;cl-e2fc2fdc&#34;&gt;&lt;span class=&#34;cl-e2fc13c6&#34;&gt;plotrix&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;td 
class=&#34;cl-e2fc7a6e&#34;&gt;&lt;p class=&#34;cl-e2fc2fe6&#34;&gt;&lt;span class=&#34;cl-e2fc13c6&#34;&gt;131&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr style=&#34;overflow-wrap:break-word;&#34;&gt;&lt;td class=&#34;cl-e2fc7a6f&#34;&gt;&lt;p class=&#34;cl-e2fc2fdc&#34;&gt;&lt;span class=&#34;cl-e2fc13c6&#34;&gt;sp&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;td class=&#34;cl-e2fc7a82&#34;&gt;&lt;p class=&#34;cl-e2fc2fe6&#34;&gt;&lt;span class=&#34;cl-e2fc13c6&#34;&gt;128&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr style=&#34;overflow-wrap:break-word;&#34;&gt;&lt;td class=&#34;cl-e2fc7a8c&#34;&gt;&lt;p class=&#34;cl-e2fc2fdc&#34;&gt;&lt;span class=&#34;cl-e2fc13c6&#34;&gt;XML&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;td class=&#34;cl-e2fc7a96&#34;&gt;&lt;p class=&#34;cl-e2fc2fe6&#34;&gt;&lt;span class=&#34;cl-e2fc13c6&#34;&gt;126&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr style=&#34;overflow-wrap:break-word;&#34;&gt;&lt;td class=&#34;cl-e2fc7a97&#34;&gt;&lt;p class=&#34;cl-e2fc2fdc&#34;&gt;&lt;span class=&#34;cl-e2fc13c6&#34;&gt;Rcmdr&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;td class=&#34;cl-e2fc7aa0&#34;&gt;&lt;p class=&#34;cl-e2fc2fe6&#34;&gt;&lt;span class=&#34;cl-e2fc13c6&#34;&gt;123&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr style=&#34;overflow-wrap:break-word;&#34;&gt;&lt;td class=&#34;cl-e2fc7a97&#34;&gt;&lt;p class=&#34;cl-e2fc2fdc&#34;&gt;&lt;span class=&#34;cl-e2fc13c6&#34;&gt;lme4&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;td class=&#34;cl-e2fc7aa0&#34;&gt;&lt;p class=&#34;cl-e2fc2fe6&#34;&gt;&lt;span class=&#34;cl-e2fc13c6&#34;&gt;122&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr style=&#34;overflow-wrap:break-word;&#34;&gt;&lt;td class=&#34;cl-e2fc7a5a&#34;&gt;&lt;p class=&#34;cl-e2fc2fdc&#34;&gt;&lt;span class=&#34;cl-e2fc13c6&#34;&gt;gstat&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;td class=&#34;cl-e2fc7a46&#34;&gt;&lt;p class=&#34;cl-e2fc2fe6&#34;&gt;&lt;span 
class=&#34;cl-e2fc13c6&#34;&gt;121&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr style=&#34;overflow-wrap:break-word;&#34;&gt;&lt;td class=&#34;cl-e2fc7a8c&#34;&gt;&lt;p class=&#34;cl-e2fc2fdc&#34;&gt;&lt;span class=&#34;cl-e2fc13c6&#34;&gt;arm&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;td class=&#34;cl-e2fc7a96&#34;&gt;&lt;p class=&#34;cl-e2fc2fe6&#34;&gt;&lt;span class=&#34;cl-e2fc13c6&#34;&gt;119&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr style=&#34;overflow-wrap:break-word;&#34;&gt;&lt;td class=&#34;cl-e2fc7a64&#34;&gt;&lt;p class=&#34;cl-e2fc2fdc&#34;&gt;&lt;span class=&#34;cl-e2fc13c6&#34;&gt;foreign&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;td class=&#34;cl-e2fc7a6e&#34;&gt;&lt;p class=&#34;cl-e2fc2fe6&#34;&gt;&lt;span class=&#34;cl-e2fc13c6&#34;&gt;117&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr style=&#34;overflow-wrap:break-word;&#34;&gt;&lt;td class=&#34;cl-e2fc7a5a&#34;&gt;&lt;p class=&#34;cl-e2fc2fdc&#34;&gt;&lt;span class=&#34;cl-e2fc13c6&#34;&gt;party&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;td class=&#34;cl-e2fc7a46&#34;&gt;&lt;p class=&#34;cl-e2fc2fe6&#34;&gt;&lt;span class=&#34;cl-e2fc13c6&#34;&gt;117&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr style=&#34;overflow-wrap:break-word;&#34;&gt;&lt;td class=&#34;cl-e2fc7a64&#34;&gt;&lt;p class=&#34;cl-e2fc2fdc&#34;&gt;&lt;span class=&#34;cl-e2fc13c6&#34;&gt;maptools&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;td class=&#34;cl-e2fc7a6e&#34;&gt;&lt;p class=&#34;cl-e2fc2fe6&#34;&gt;&lt;span class=&#34;cl-e2fc13c6&#34;&gt;113&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr style=&#34;overflow-wrap:break-word;&#34;&gt;&lt;td class=&#34;cl-e2fc7aa1&#34;&gt;&lt;p class=&#34;cl-e2fc2fdc&#34;&gt;&lt;span class=&#34;cl-e2fc13c6&#34;&gt;raster&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;td class=&#34;cl-e2fc7aaa&#34;&gt;&lt;p class=&#34;cl-e2fc2fe6&#34;&gt;&lt;span class=&#34;cl-e2fc13c6&#34;&gt;108&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/div&gt;&lt;/template&gt;
&lt;div class=&#34;flextable-shadow-host&#34; id=&#34;c207439a-5643-4e95-950e-721182ef54dd&#34;&gt;&lt;/div&gt;
&lt;script&gt;
var dest = document.getElementById(&#34;c207439a-5643-4e95-950e-721182ef54dd&#34;);
var template = document.getElementById(&#34;41fb6fac-ce02-4889-ac51-217e365f4058&#34;);
var caption = template.content.querySelector(&#34;caption&#34;);
if(caption) {
  caption.style.cssText = &#34;display:block;text-align:center;&#34;;
  var newcapt = document.createElement(&#34;p&#34;);
  newcapt.appendChild(caption)
  dest.parentNode.insertBefore(newcapt, dest.previousSibling);
}
var fantome = dest.attachShadow({mode: &#39;open&#39;});
var templateContent = template.content;
fantome.appendChild(templateContent);
&lt;/script&gt;

&lt;p&gt;Surprisingly, there are packages with more than 200 versions on CRAN!&lt;/p&gt;
&lt;div class=&#34;figure&#34;&gt;&lt;span style=&#34;display:block;&#34; id=&#34;fig:release-distribution&#34;&gt;&lt;/span&gt;
&lt;img src=&#34;https://llrs.dev/post/2022/07/28/cran-files-2/index.en_files/figure-html/release-distribution-1.png&#34; alt=&#34;*Releases distribution*. Packages and number of releases&#34; width=&#34;672&#34; /&gt;
&lt;p class=&#34;caption&#34;&gt;
Figure 2: &lt;em&gt;Releases distribution&lt;/em&gt;. Packages and number of releases
&lt;/p&gt;
&lt;/div&gt;
&lt;p&gt;Most packages have just 1 release, the typical package has 3, and the mean is around 6.&lt;/p&gt;
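&lt;p&gt;As a rough sketch, these release counts can be derived from the archive file mentioned earlier; this assumes the private &lt;code&gt;tools:::CRAN_archive_db()&lt;/code&gt; returns one data frame of archived versions per package (note the sketch misses packages that have never been archived, i.e. single-release packages):&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;# Archived versions per package, plus the current version on CRAN
archive &amp;lt;- tools:::CRAN_archive_db()
releases &amp;lt;- vapply(archive, nrow, integer(1)) + 1
summary(releases) # median and mean number of releases
sort(releases, decreasing = TRUE)[1:10] # the most updated packages&lt;/code&gt;&lt;/pre&gt;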
&lt;p&gt;Given all these different versions of packages, how big are all the packages on CRAN?&lt;/p&gt;
&lt;div id=&#34;cran-size&#34; class=&#34;section level3&#34;&gt;
&lt;h3&gt;CRAN size&lt;/h3&gt;
&lt;p&gt;Have you ever wondered how big CRAN is? Summing the file sizes of the source packages, all CRAN source packages add up to approximately 96.8 GB.&lt;/p&gt;
&lt;p&gt;This doesn’t include the binaries for multiple architectures and operating systems.
The size of a package might indicate whether it contains a considerable amount of data.&lt;/p&gt;
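&lt;p&gt;A minimal sketch of how this total could be computed, assuming the private file databases report each file &lt;code&gt;size&lt;/code&gt; in bytes:&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;current &amp;lt;- tools:::CRAN_current_db()
archive &amp;lt;- tools:::CRAN_archive_db()
sizes &amp;lt;- c(current$size, unlist(lapply(archive, `[[`, &#34;size&#34;)))
sum(sizes) / 1024^3 # total size of all source tarballs in GB&lt;/code&gt;&lt;/pre&gt;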
&lt;p&gt;Looking at the size of the packages over time, we can see this pattern:&lt;/p&gt;
&lt;div class=&#34;figure&#34;&gt;&lt;span style=&#34;display:block;&#34; id=&#34;fig:packages-size&#34;&gt;&lt;/span&gt;
&lt;img src=&#34;https://llrs.dev/post/2022/07/28/cran-files-2/index.en_files/figure-html/packages-size-1.png&#34; alt=&#34;*Packages and their median size.* Archived packages have become bigger since 2014. Packages on CRAN have been getting bigger since 2017.&#34; width=&#34;672&#34; /&gt;
&lt;p class=&#34;caption&#34;&gt;
Figure 3: &lt;em&gt;Packages and their median size.&lt;/em&gt; Archived packages have become bigger since 2014. Packages on CRAN have been getting bigger since 2017.
&lt;/p&gt;
&lt;/div&gt;
&lt;p&gt;Packages currently available on CRAN are smaller than those no longer on CRAN:
archived versions of packages are usually bigger than the current versions.
The median size of packages is increasing, and quickly.&lt;/p&gt;
&lt;div class=&#34;figure&#34;&gt;&lt;span style=&#34;display:block;&#34; id=&#34;fig:release-size&#34;&gt;&lt;/span&gt;
&lt;img src=&#34;https://llrs.dev/post/2022/07/28/cran-files-2/index.en_files/figure-html/release-size-1.png&#34; alt=&#34;*Size of packages with releases.* Packages are usually small but seem to gain weight when updated.&#34; width=&#34;672&#34; /&gt;
&lt;p class=&#34;caption&#34;&gt;
Figure 4: &lt;em&gt;Size of packages with releases.&lt;/em&gt; Packages are usually small but seem to gain weight when updated.
&lt;/p&gt;
&lt;/div&gt;
&lt;p&gt;Typically packages increase their size with each new release until they reach around 50 releases.
Beyond that, the plot depends on very few packages and might not be representative.&lt;/p&gt;
&lt;div class=&#34;figure&#34;&gt;&lt;span style=&#34;display:block;&#34; id=&#34;fig:release-size2&#34;&gt;&lt;/span&gt;
&lt;img src=&#34;https://llrs.dev/post/2022/07/28/cran-files-2/index.en_files/figure-html/release-size2-1.png&#34; alt=&#34;*Size of packages with releases by availability.* Packages no longer in CRAN are usually smaller than those in it. The continuous black line is CRAN&#39;s current threshold, while the discontinuous black line is the current median size.&#34; width=&#34;672&#34; /&gt;
&lt;p class=&#34;caption&#34;&gt;
Figure 5: &lt;em&gt;Size of packages with releases by availability.&lt;/em&gt; Packages no longer in CRAN are usually smaller than those in it. The continuous black line is CRAN’s current threshold, while the discontinuous black line is the current median size.
&lt;/p&gt;
&lt;/div&gt;
&lt;p&gt;Here we can see more clearly how packages tend to stay below the CRAN threshold.
There isn’t much of a difference between packages available on CRAN and those archived.&lt;/p&gt;
&lt;p&gt;If we look at the size of the first release of each package over time, we see a representative view:&lt;/p&gt;
&lt;div class=&#34;figure&#34;&gt;&lt;span style=&#34;display:block;&#34; id=&#34;fig:size-time&#34;&gt;&lt;/span&gt;
&lt;img src=&#34;https://llrs.dev/post/2022/07/28/cran-files-2/index.en_files/figure-html/size-time-1.png&#34; alt=&#34;*Size of the first release by time*. Package size increases with time, with a peak around 2010; it has been increasing again since 2014 but still hasn&#39;t surpassed the previous record.&#34; width=&#34;672&#34; /&gt;
&lt;p class=&#34;caption&#34;&gt;
Figure 6: &lt;em&gt;Size of the first release by time&lt;/em&gt;. Package size increases with time, with a peak around 2010; it has been increasing again since 2014 but still hasn’t surpassed the previous record.
&lt;/p&gt;
&lt;/div&gt;
&lt;p&gt;Package size tends to increase, except for the brief period between 2010 and 2014.
Currently it increases more slowly than before that period, but it is close to its maximum.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div id=&#34;conclusions&#34; class=&#34;section level2&#34;&gt;
&lt;h2&gt;Conclusions&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Most packages are not updated much, between 1 and 3 times.
But some packages are updated quite a lot; this might mean they are data packages rather than software packages, or that they have frequent minor and major updates.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Most current packages are smaller than those archived:
packages no longer available were usually bigger than those still on CRAN.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Surprisingly, packages increase their size considerably until around their 25th release.
Size also increases with time, except for the period between 2010 and 2014.
This decrease might be due to a change in CRAN policy.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div id=&#34;future-parts&#34; class=&#34;section level2&#34;&gt;
&lt;h2&gt;Future parts&lt;/h2&gt;
&lt;p&gt;In future posts I’ll explore:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;patterns in accepting new packages and package updates.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the relation between dependencies, initial release and updates.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;who handled the packages.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
</description>
    </item>
    
    <item>
      <title>Exploring CRAN&#39;s files: part 1</title>
      <link>https://llrs.dev/post/2022/07/23/cran-files-1/</link>
      <pubDate>Sat, 23 Jul 2022 00:00:00 +0000</pubDate>
      <guid>https://llrs.dev/post/2022/07/23/cran-files-1/</guid>
      <description>


&lt;div id=&#34;introduction&#34; class=&#34;section level2&#34;&gt;
&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;There are many great things in base R; one of them is the &lt;a href=&#34;https://search.r-project.org/R/refmans/tools/html/00Index.html&#34;&gt;tools package&lt;/a&gt;.
This package provides the functions used to build, check and create packages, documentation and manuals.&lt;/p&gt;
&lt;p&gt;As I wanted to know how CRAN works and how it has changed, I looked into the source code of tools.
I found some internal functions that access freely available files with information about CRAN packages.
These private functions are in the &lt;a href=&#34;https://svn.r-project.org/R/trunk/src/library/tools/R/CRANtools.R&#34;&gt;CRANtools.R file&lt;/a&gt;.&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;packages &amp;lt;- tools::CRAN_package_db()   # exported: metadata of the current packages
# current &amp;lt;- tools:::CRAN_current_db()  # files of the current package versions
# archive &amp;lt;- tools:::CRAN_archive_db()  # files of the archived package versions
# issues &amp;lt;- tools::CRAN_check_issues()  # check issues reported by CRAN
# alias &amp;lt;- tools:::CRAN_aliases_db()    # help page aliases of the packages
# rdxrefs &amp;lt;- tools:::CRAN_rdxrefs_db()  # Rd cross-references&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As I was not sure about the information in these files I asked on &lt;a href=&#34;https://stat.ethz.ch/pipermail/r-devel/2022-May/081770.html&#34;&gt;R-devel&lt;/a&gt;, but I did not receive an answer.
They seem to be quite obscure and, as private functions, they might be removed without notice, so they shouldn’t be relied upon by any dependency.
However, as the files contain information about CRAN, they might provide interesting clues about the history of CRAN and how it is operated.&lt;/p&gt;
&lt;p&gt;In this post I will focus on the first file.
I’ll explore a couple of fields, and in future posts I will use the other files to explore more of CRAN’s history.&lt;/p&gt;
&lt;div id=&#34;packages-file&#34; class=&#34;section level3&#34;&gt;
&lt;h3&gt;packages file&lt;/h3&gt;
&lt;p&gt;First of all, a very brief exploration of what is in this file:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;##    Package Version Priority                        Depends
## 1       A3   1.0.0     &amp;lt;NA&amp;gt; R (&amp;gt;= 2.15.0), xtable, pbapply
## 2 AATtools   0.0.1     &amp;lt;NA&amp;gt;                   R (&amp;gt;= 3.6.0)
## 3   ABACUS   1.0.0     &amp;lt;NA&amp;gt;                   R (&amp;gt;= 3.1.0)
##                                 Imports LinkingTo
## 1                                  &amp;lt;NA&amp;gt;      &amp;lt;NA&amp;gt;
## 2  magrittr, dplyr, doParallel, foreach      &amp;lt;NA&amp;gt;
## 3 ggplot2 (&amp;gt;= 3.1.0), shiny (&amp;gt;= 1.3.1),      &amp;lt;NA&amp;gt;
##                               Suggests Enhances    License License_is_FOSS
## 1                  randomForest, e1071     &amp;lt;NA&amp;gt; GPL (&amp;gt;= 2)            &amp;lt;NA&amp;gt;
## 2                                 &amp;lt;NA&amp;gt;     &amp;lt;NA&amp;gt;      GPL-3            &amp;lt;NA&amp;gt;
## 3 rmarkdown (&amp;gt;= 1.13), knitr (&amp;gt;= 1.22)     &amp;lt;NA&amp;gt;      GPL-3            &amp;lt;NA&amp;gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The packages file has information similar to &lt;code&gt;available.packages()&lt;/code&gt;, but with many more columns: the published date, reverse dependencies, X-CRAN-Comment, who packaged it…
Also note that these packages are not filtered to match the R version, OS_type or subarch, and there are near duplicates (I learned about this filtering while reading the great documentation of &lt;a href=&#34;https://search.r-project.org/R/refmans/utils/html/available.packages.html&#34;&gt;&lt;code&gt;available.packages()&lt;/code&gt;&lt;/a&gt; and also finding some mentions online).&lt;/p&gt;
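&lt;p&gt;The difference in filtering can be checked directly; here is a sketch (these filter names are the defaults documented for &lt;code&gt;available.packages()&lt;/code&gt;):&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;packages &amp;lt;- tools::CRAN_package_db() # unfiltered, with near duplicates
avail &amp;lt;- available.packages(filters = c(&#34;R_version&#34;, &#34;OS_type&#34;, &#34;subarch&#34;, &#34;duplicates&#34;))
nrow(packages) # more rows than...
nrow(avail)    # ...the filtered set for this R version and OS&lt;/code&gt;&lt;/pre&gt;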
&lt;p&gt;As we have data spanning several years, I’ll sometimes show the release dates of different R versions to provide some context.
Without further delay, let’s explore the data!&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div id=&#34;accepted&#34; class=&#34;section level2&#34;&gt;
&lt;h2&gt;Published packages&lt;/h2&gt;
&lt;p&gt;CRAN started some time ago (in 1997) but it hasn’t remained frozen.
The package archive (the A in CRAN) has kept updating since then.
For instance, the current packages do not include packages that were removed, archived or replaced by updates.&lt;/p&gt;
&lt;p&gt;First, packages are submitted to CRAN and, once accepted, they are published.
As acceptance and publication are usually almost simultaneous, I might use the terms as synonyms.
Looking at the currently available packages and their publication dates, we can see the following:&lt;/p&gt;
&lt;div class=&#34;figure&#34;&gt;&lt;span style=&#34;display:block;&#34; id=&#34;fig:daily-cran&#34;&gt;&lt;/span&gt;
&lt;img src=&#34;https://llrs.dev/post/2022/07/23/cran-files-1/index.en_files/figure-html/daily-cran-1.png&#34; alt=&#34;ggplot2 plot of date vs packages accepted on a given day. Until 2020 fewer than 10 packages were accepted daily. Lately more than 30 are added to CRAN. The plot also displays the R release versions from 2.12 in 2010 to 4.2.0 in 2022.&#34; width=&#34;672&#34; /&gt;
&lt;p class=&#34;caption&#34;&gt;
Figure 1: &lt;em&gt;Packages accepted on CRAN by the publication date.&lt;/em&gt;
&lt;/p&gt;
&lt;/div&gt;
&lt;p&gt;The oldest package version still available was added in 2010.
This means a package without issues, dependency changes or bugs detected by the automatic checks for 12 years!&lt;/p&gt;
&lt;p&gt;The daily rate of acceptance has increased from fewer than 10 a day until 2020 to more than 30 this year, 2022.
If we summarize that information by month, the little bump in 2020 disappears but other patterns emerge:&lt;/p&gt;
&lt;div class=&#34;figure&#34;&gt;&lt;span style=&#34;display:block;&#34; id=&#34;fig:monthly-cran&#34;&gt;&lt;/span&gt;
&lt;img src=&#34;https://llrs.dev/post/2022/07/23/cran-files-1/index.en_files/figure-html/monthly-cran-1.png&#34; alt=&#34;ggplot figure with the monthly published packages. Until 2015 it rises very slowly, then it hovers around 50 monthly packages with some wobbles. In 2022 it rose to over 800 packages.&#34; width=&#34;672&#34; /&gt;
&lt;p class=&#34;caption&#34;&gt;
Figure 2: &lt;em&gt;Monthly packages published to CRAN&lt;/em&gt;. Some monthly variance is observed.
&lt;/p&gt;
&lt;/div&gt;
&lt;p&gt;Instead of just one bump we see waves, with fewer packages accepted on CRAN late in the year and an increase during the first months of the year.&lt;/p&gt;
&lt;p&gt;If we look at the accumulated packages on CRAN we see an exponential growth:&lt;/p&gt;
&lt;div class=&#34;figure&#34;&gt;&lt;span style=&#34;display:block;&#34; id=&#34;fig:cran-cumsum&#34;&gt;&lt;/span&gt;
&lt;img src=&#34;https://llrs.dev/post/2022/07/23/cran-files-1/index.en_files/figure-html/cran-cumsum-1.png&#34; alt=&#34;Plot with the cumulative number of packages on CRAN, rising from a few tens to currently more than 18000.&#34; width=&#34;672&#34; /&gt;
&lt;p class=&#34;caption&#34;&gt;
Figure 3: &lt;em&gt;Accumulation of packages&lt;/em&gt;. Most of the packages have been published in the last 2 years.
&lt;/p&gt;
&lt;/div&gt;
&lt;p&gt;In fact, more of the packages currently on CRAN were added since March 2021 than in all the previous years.&lt;/p&gt;
&lt;div class=&#34;figure&#34;&gt;&lt;span style=&#34;display:block;&#34; id=&#34;fig:cran-perc&#34;&gt;&lt;/span&gt;
&lt;img src=&#34;https://llrs.dev/post/2022/07/23/cran-files-1/index.en_files/figure-html/cran-perc-1.png&#34; alt=&#34;Line with percentages of packages in CRAN by date. Close to 50% of current packages were published between 2010 and 2021.&#34; width=&#34;672&#34; /&gt;
&lt;p class=&#34;caption&#34;&gt;
Figure 4: &lt;em&gt;Percentage of current packages on CRAN according to their date of publication&lt;/em&gt;. Most of them were published/updated in the last year and a half.
&lt;/p&gt;
&lt;/div&gt;
&lt;p&gt;This is a good time to remember that the date being used is the publication date of the current version of each package.
Many had previous versions on CRAN:&lt;/p&gt;
&lt;template id=&#34;9668142b-64d5-4c3d-842e-fbcef8304c16&#34;&gt;&lt;style&gt;
.tabwid table{
  border-spacing:0px !important;
  border-collapse:collapse;
  line-height:1;
  margin-left:auto;
  margin-right:auto;
  border-width: 0;
  display: table;
  margin-top: 1.275em;
  margin-bottom: 1.275em;
  border-color: transparent;
}
.tabwid_left table{
  margin-left:0;
}
.tabwid_right table{
  margin-right:0;
}
.tabwid td {
    padding: 0;
}
.tabwid a {
  text-decoration: none;
}
.tabwid thead {
    background-color: transparent;
}
.tabwid tfoot {
    background-color: transparent;
}
.tabwid table tr {
background-color: transparent;
}
&lt;/style&gt;&lt;div class=&#34;tabwid&#34;&gt;&lt;style&gt;.cl-3baefb4c{}.cl-3ba22c8c{font-family:&#39;DejaVu Sans&#39;;font-size:11pt;font-weight:normal;font-style:normal;text-decoration:none;color:rgba(0, 0, 0, 1.00);background-color:transparent;}.cl-3ba253e2{margin:0;text-align:left;border-bottom: 0 solid rgba(0, 0, 0, 1.00);border-top: 0 solid rgba(0, 0, 0, 1.00);border-left: 0 solid rgba(0, 0, 0, 1.00);border-right: 0 solid rgba(0, 0, 0, 1.00);padding-bottom:5pt;padding-top:5pt;padding-left:5pt;padding-right:5pt;line-height: 1;background-color:transparent;}.cl-3ba253ec{margin:0;text-align:right;border-bottom: 0 solid rgba(0, 0, 0, 1.00);border-top: 0 solid rgba(0, 0, 0, 1.00);border-left: 0 solid rgba(0, 0, 0, 1.00);border-right: 0 solid rgba(0, 0, 0, 1.00);padding-bottom:5pt;padding-top:5pt;padding-left:5pt;padding-right:5pt;line-height: 1;background-color:transparent;}.cl-3ba2b7e2{width:88.3pt;background-color:transparent;vertical-align: middle;border-bottom: 0 solid rgba(0, 0, 0, 1.00);border-top: 0 solid rgba(0, 0, 0, 1.00);border-left: 0 solid rgba(0, 0, 0, 1.00);border-right: 0 solid rgba(0, 0, 0, 1.00);margin-bottom:0;margin-top:0;margin-left:0;margin-right:0;}.cl-3ba2b7f6{width:72.5pt;background-color:transparent;vertical-align: middle;border-bottom: 0 solid rgba(0, 0, 0, 1.00);border-top: 0 solid rgba(0, 0, 0, 1.00);border-left: 0 solid rgba(0, 0, 0, 1.00);border-right: 0 solid rgba(0, 0, 0, 1.00);margin-bottom:0;margin-top:0;margin-left:0;margin-right:0;}.cl-3ba2b7f7{width:88.3pt;background-color:transparent;vertical-align: middle;border-bottom: 2pt solid rgba(102, 102, 102, 1.00);border-top: 0 solid rgba(0, 0, 0, 1.00);border-left: 0 solid rgba(0, 0, 0, 1.00);border-right: 0 solid rgba(0, 0, 0, 1.00);margin-bottom:0;margin-top:0;margin-left:0;margin-right:0;}.cl-3ba2b800{width:72.5pt;background-color:transparent;vertical-align: middle;border-bottom: 2pt solid rgba(102, 102, 102, 1.00);border-top: 0 solid rgba(0, 0, 0, 1.00);border-left: 0 solid 
rgba(0, 0, 0, 1.00);border-right: 0 solid rgba(0, 0, 0, 1.00);margin-bottom:0;margin-top:0;margin-left:0;margin-right:0;}.cl-3ba2b80a{width:88.3pt;background-color:transparent;vertical-align: middle;border-bottom: 2pt solid rgba(102, 102, 102, 1.00);border-top: 2pt solid rgba(102, 102, 102, 1.00);border-left: 0 solid rgba(0, 0, 0, 1.00);border-right: 0 solid rgba(0, 0, 0, 1.00);margin-bottom:0;margin-top:0;margin-left:0;margin-right:0;}.cl-3ba2b814{width:72.5pt;background-color:transparent;vertical-align: middle;border-bottom: 2pt solid rgba(102, 102, 102, 1.00);border-top: 2pt solid rgba(102, 102, 102, 1.00);border-left: 0 solid rgba(0, 0, 0, 1.00);border-right: 0 solid rgba(0, 0, 0, 1.00);margin-bottom:0;margin-top:0;margin-left:0;margin-right:0;}&lt;/style&gt;&lt;table class=&#39;cl-3baefb4c&#39;&gt;
&lt;thead&gt;&lt;tr style=&#34;overflow-wrap:break-word;&#34;&gt;&lt;td class=&#34;cl-3ba2b80a&#34;&gt;&lt;p class=&#34;cl-3ba253e2&#34;&gt;&lt;span class=&#34;cl-3ba22c8c&#34;&gt;First release&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;td class=&#34;cl-3ba2b814&#34;&gt;&lt;p class=&#34;cl-3ba253ec&#34;&gt;&lt;span class=&#34;cl-3ba22c8c&#34;&gt;Packages&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr style=&#34;overflow-wrap:break-word;&#34;&gt;&lt;td class=&#34;cl-3ba2b7e2&#34;&gt;&lt;p class=&#34;cl-3ba253e2&#34;&gt;&lt;span class=&#34;cl-3ba22c8c&#34;&gt;No&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;td class=&#34;cl-3ba2b7f6&#34;&gt;&lt;p class=&#34;cl-3ba253ec&#34;&gt;&lt;span class=&#34;cl-3ba22c8c&#34;&gt;14,294&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr style=&#34;overflow-wrap:break-word;&#34;&gt;&lt;td class=&#34;cl-3ba2b7f7&#34;&gt;&lt;p class=&#34;cl-3ba253e2&#34;&gt;&lt;span class=&#34;cl-3ba22c8c&#34;&gt;Yes&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;td class=&#34;cl-3ba2b800&#34;&gt;&lt;p class=&#34;cl-3ba253ec&#34;&gt;&lt;span class=&#34;cl-3ba22c8c&#34;&gt;4,113&lt;/span&gt;&lt;/p&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/div&gt;&lt;/template&gt;
&lt;div class=&#34;flextable-shadow-host&#34; id=&#34;1027b3f4-86a2-414b-90aa-a3bab733e0c0&#34;&gt;&lt;/div&gt;
&lt;script&gt;
var dest = document.getElementById(&#34;1027b3f4-86a2-414b-90aa-a3bab733e0c0&#34;);
var template = document.getElementById(&#34;9668142b-64d5-4c3d-842e-fbcef8304c16&#34;);
var caption = template.content.querySelector(&#34;caption&#34;);
if(caption) {
  caption.style.cssText = &#34;display:block;text-align:center;&#34;;
  var newcapt = document.createElement(&#34;p&#34;);
  newcapt.appendChild(caption)
  dest.parentNode.insertBefore(newcapt, dest.previousSibling);
}
var fantome = dest.attachShadow({mode: &#39;open&#39;});
var templateContent = template.content;
fantome.appendChild(templateContent);
&lt;/script&gt;

&lt;/div&gt;
&lt;div id=&#34;delays&#34; class=&#34;section level2&#34;&gt;
&lt;h2&gt;Processing time&lt;/h2&gt;
&lt;p&gt;Previously I found that &lt;a href=&#34;https://llrs.dev/post/2021/01/31/cran-review/&#34;&gt;CRAN submissions&lt;/a&gt; present some key differences between new packages and already published packages, which impact how long they need to wait to be published on CRAN.
With the existing data we can measure how fast the process is by comparing the published date with the build date.&lt;/p&gt;
&lt;p&gt;The build date is added to the tar.gz file automatically when the developer builds the package via &lt;code&gt;R CMD build&lt;/code&gt;. However, the published date is set by CRAN once the package is accepted.&lt;/p&gt;
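&lt;p&gt;A sketch of how this delay can be estimated, assuming the &lt;code&gt;Packaged&lt;/code&gt; and &lt;code&gt;Published&lt;/code&gt; columns of &lt;code&gt;tools::CRAN_package_db()&lt;/code&gt; hold the build timestamp and the publication date:&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;packages &amp;lt;- tools::CRAN_package_db()
# &#34;Packaged&#34; looks like &#34;2022-07-01 10:00:00 UTC; user&#34;; keep the date part
built &amp;lt;- as.Date(sub(&#34;;.*&#34;, &#34;&#34;, packages$Packaged))
published &amp;lt;- as.Date(packages$Published)
summary(as.numeric(published - built)) # delay in days&lt;/code&gt;&lt;/pre&gt;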
&lt;p&gt;To visualize this, I will also check whether there is any difference between new packages and those that were already on CRAN:&lt;/p&gt;
&lt;div class=&#34;figure&#34;&gt;&lt;span style=&#34;display:block;&#34; id=&#34;fig:cran-delays&#34;&gt;&lt;/span&gt;
&lt;img src=&#34;https://llrs.dev/post/2022/07/23/cran-files-1/index.en_files/figure-html/cran-delays-1.png&#34; alt=&#34;Histogram of packages and the time between build and publication. They take less than 50 days usually.&#34; width=&#34;672&#34; /&gt;
&lt;p class=&#34;caption&#34;&gt;
Figure 5: &lt;em&gt;Histogram of time difference between building and publishing a package.&lt;/em&gt; Color indicates if the package is new to CRAN or not. Most of the published packages take more or less the same time regardless of if it is the first time or not.
&lt;/p&gt;
&lt;/div&gt;
&lt;p&gt;There doesn’t seem to be much difference between the build date and the publication date according to whether it is the first release or not.
The precision is just a day, and this is usually a fast process, well below 50 days.
Few packages spend much longer between build and publication, and they are too few to be noticeable at this scale.
Since 2016/05/02 there is a &lt;a href=&#34;https://github.com/r-devel/r-svn/blob/676c1183801648b68f8f6719701445b2f9a5e3fd/src/library/tools/R/QC.R#L7583&#34;&gt;check&lt;/a&gt; that raises an issue if the build is older than a month.&lt;/p&gt;
&lt;p&gt;Note that one might need to build the package multiple times before it is accepted.
Packages published for the first time on CRAN might have been submitted previously, but once they finally build and pass the checks and the manual review, they are handled as fast as packages already on CRAN.&lt;/p&gt;
&lt;p&gt;However, this time between build and acceptance might have changed with time:&lt;/p&gt;
&lt;div class=&#34;figure&#34;&gt;&lt;span style=&#34;display:block;&#34; id=&#34;fig:cran-delays2&#34;&gt;&lt;/span&gt;
&lt;img src=&#34;https://llrs.dev/post/2022/07/23/cran-files-1/index.en_files/figure-html/cran-delays2-1.png&#34; alt=&#34;Smoothed lines of published packages with different linetype and color depending on if it is the first time they are on CRAN or not. New packages currently take less than 4 days and old packages less than 2. This is down from 2018 to 2021, when new packages took above 4 days to be published on CRAN&#34; width=&#34;672&#34; /&gt;
&lt;p class=&#34;caption&#34;&gt;
Figure 6: &lt;em&gt;Processing time between building the package and being published, by date.&lt;/em&gt; There is a clear difference between new packages and old ones. New packages usually take more time, while existing packages currently take less than a day.
&lt;/p&gt;
&lt;/div&gt;
&lt;p&gt;We clearly see a difference in processing time between packages already on CRAN and those that are not.
Keep in mind that for the few packages from before 2016 the estimate might not be accurate.
At the same time this is consistent with the manual review process (for more information see &lt;a href=&#34;https://llrs.dev/post/2021/01/31/cran-review/&#34;&gt;my previous post&lt;/a&gt; about the review process of CRAN or my &lt;a href=&#34;https://llrs.dev/talk/user-2021/&#34;&gt;talk at useR2021&lt;/a&gt;).
It also means that there is a huge variation in how long packages take to be handled.
However, this seems to be shrinking: while in 2010 it took around 2 weeks, nowadays it takes less than a week, getting closer to the 1-day median between building a package and appearing on CRAN that existing packages already enjoy.&lt;/p&gt;
&lt;p&gt;This difference might be explained by experience: authors and maintainers whose package(s) are already on CRAN know better how to submit a new version that passes the checks without problems.&lt;/p&gt;
&lt;p&gt;It could also be that new packages need more time from the CRAN team.
In 2020 we see that it took longer than in previous years for packages to be added to CRAN.
Maybe the increase in processing time in 2020 was due to the huge volume of submissions CRAN received, or to more checks on the developer side before submitting to CRAN.&lt;/p&gt;
&lt;p&gt;Both explanations are not mutually exclusive.&lt;/p&gt;
&lt;details&gt;
&lt;summary&gt;
Do more packages published the same day mean more processing time? It doesn’t look like it.
&lt;/summary&gt;
&lt;div class=&#34;figure&#34;&gt;&lt;span style=&#34;display:block;&#34; id=&#34;fig:cran-reasons&#34;&gt;&lt;/span&gt;
&lt;img src=&#34;https://llrs.dev/post/2022/07/23/cran-files-1/index.en_files/figure-html/cran-reasons-1.png&#34; alt=&#34;ggplot graphic with the time of processing time and the number of packages accepted the same day. New packages have less delay than already published packages, but the more packages are accepted, the less delay there is.&#34; width=&#34;672&#34; /&gt;
&lt;p class=&#34;caption&#34;&gt;
Figure 7: &lt;em&gt;Packages accepted the same day and processing time.&lt;/em&gt; New packages are accepted sooner than packages already on CRAN with respect to the build date.
&lt;/p&gt;
&lt;/div&gt;
&lt;p&gt;Surprisingly, we see a lot of variation in the delay of packages already accepted on CRAN.
In addition, the more new packages are accepted the same day, the less delay there is.
I think this just means that when reviewers work on the submission queue several packages might be approved at once.&lt;/p&gt;
&lt;p&gt;This might also mean packages have already been built several times before finally being accepted, once the errors, warnings and notes have been solved.
Last, it could indicate that developers whose package is already on CRAN wait a bit between building and submitting it, either because they take some time to double-check before submission (dependencies, checks on several machines, other?) or because of a time zone difference (submitting at noon in one region but during the reviewers’ night).&lt;/p&gt;
&lt;/details&gt;
&lt;/div&gt;
&lt;div id=&#34;conclusion&#34; class=&#34;section level2&#34;&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Some packages have been working without problems for 12 years, despite several major changes in R (see figure &lt;a href=&#34;#fig:daily-cran&#34;&gt;1&lt;/a&gt;).
This speaks volumes about the packages’ quality and the backward compatibility that the R core team aims for and CRAN checks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;CRAN accepts an incredible number of packages daily and monthly.
The system and the team are doing incredible work, mostly in their free time (see figure &lt;a href=&#34;#fig:monthly-cran&#34;&gt;2&lt;/a&gt;).
Many thanks!&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Accepted packages are handled very fast, usually in less than a week (see figure &lt;a href=&#34;#fig:cran-reasons&#34;&gt;7&lt;/a&gt;).
But from the dates alone it is not possible to distinguish time spent in the submission system from time on the developer’s computer.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div id=&#34;future-parts&#34; class=&#34;section level2&#34;&gt;
&lt;h2&gt;Future parts&lt;/h2&gt;
&lt;p&gt;We’ve explored a snapshot of current packages and a brief window of all the history of CRAN.
There is much more that can be done with all the other files.&lt;/p&gt;
&lt;p&gt;On future posts I’ll explore:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Patterns in accepting packages and package updates.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Who handled the packages.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;The size of packages.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;The relation between dependencies, initial releases and updates.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Other suggestions?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Edit&lt;/strong&gt;: Many thanks to &lt;a href=&#34;https://masalmon.eu/&#34;&gt;Maëlle Salmon&lt;/a&gt; and &lt;a href=&#34;https://dirk.eddelbuettel.com/&#34;&gt;Dirk Eddelbuettel&lt;/a&gt; for their feedback on an initial version of this series of posts.&lt;/p&gt;
&lt;div id=&#34;reproducibility&#34; class=&#34;section level3&#34;&gt;
&lt;h3&gt;Reproducibility&lt;/h3&gt;
&lt;details&gt;
&lt;pre&gt;&lt;code&gt;## - Session info -------------------------------------------------------------------------------------------------------
##  setting  value
##  version  R version 4.2.1 (2022-06-23)
##  os       Ubuntu 20.04.4 LTS
##  system   x86_64, linux-gnu
##  ui       X11
##  language (EN)
##  collate  C
##  ctype    C
##  tz       Europe/Madrid
##  date     2022-07-23
##  pandoc   2.18 @ /usr/lib/rstudio/bin/quarto/bin/tools/ (via rmarkdown)
## 
## - Packages -----------------------------------------------------------------------------------------------------------
##  package      * version    date (UTC) lib source
##  assertthat     0.2.1      2019-03-21 [2] RSPM (R 4.2.0)
##  base64enc      0.1-3      2015-07-28 [2] CRAN (R 4.0.0)
##  blogdown       1.10       2022-05-10 [2] RSPM (R 4.2.0)
##  bookdown       0.27       2022-06-14 [2] RSPM (R 4.2.0)
##  bslib          0.4.0      2022-07-16 [2] RSPM (R 4.2.0)
##  cachem         1.0.6      2021-08-19 [2] RSPM (R 4.2.0)
##  cli            3.3.0      2022-04-25 [2] RSPM (R 4.2.0)
##  codetools      0.2-18     2020-11-04 [2] RSPM (R 4.2.0)
##  colorspace     2.0-3      2022-02-21 [2] RSPM (R 4.2.0)
##  crayon         1.5.1      2022-03-26 [2] RSPM (R 4.2.0)
##  curl           4.3.2      2021-06-23 [2] RSPM (R 4.2.0)
##  data.table     1.14.2     2021-09-27 [2] RSPM (R 4.2.0)
##  DBI            1.1.3      2022-06-18 [2] RSPM (R 4.2.0)
##  digest         0.6.29     2021-12-01 [2] RSPM (R 4.2.0)
##  dplyr        * 1.0.9      2022-04-28 [2] RSPM (R 4.2.0)
##  ellipsis       0.3.2      2021-04-29 [2] RSPM (R 4.2.0)
##  evaluate       0.15       2022-02-18 [2] RSPM (R 4.2.0)
##  fansi          1.0.3      2022-03-24 [2] RSPM (R 4.2.0)
##  farver         2.1.1      2022-07-06 [2] RSPM (R 4.2.0)
##  fastmap        1.1.0      2021-01-25 [2] RSPM (R 4.2.0)
##  flextable    * 0.7.2      2022-06-12 [2] RSPM (R 4.2.0)
##  forcats      * 0.5.1      2021-01-27 [2] RSPM (R 4.2.0)
##  gdtools        0.2.4      2022-02-14 [2] RSPM (R 4.2.0)
##  generics       0.1.3      2022-07-05 [2] RSPM (R 4.2.0)
##  geomtextpath * 0.1.0      2022-01-24 [2] CRAN (R 4.2.1)
##  ggplot2      * 3.3.6.9000 2022-06-29 [2] Github (tidyverse/ggplot2@7571122)
##  ggrepel      * 0.9.1      2021-01-15 [2] RSPM (R 4.2.0)
##  glue           1.6.2      2022-02-24 [2] RSPM (R 4.2.0)
##  gtable         0.3.0      2019-03-25 [2] CRAN (R 4.0.0)
##  highr          0.9        2021-04-16 [2] RSPM (R 4.2.0)
##  htmltools      0.5.3      2022-07-18 [2] RSPM (R 4.2.0)
##  jquerylib      0.1.4      2021-04-26 [2] RSPM (R 4.2.0)
##  jsonlite       1.8.0      2022-02-22 [2] RSPM (R 4.2.0)
##  knitr          1.39       2022-04-26 [2] RSPM (R 4.2.0)
##  labeling       0.4.2      2020-10-20 [2] RSPM (R 4.2.0)
##  lattice        0.20-45    2021-09-22 [3] CRAN (R 4.2.0)
##  lifecycle      1.0.1      2021-09-24 [2] RSPM (R 4.2.0)
##  lubridate    * 1.8.0      2021-10-07 [2] RSPM (R 4.2.0)
##  magrittr       2.0.3      2022-03-30 [2] RSPM (R 4.2.0)
##  Matrix         1.4-1      2022-03-23 [2] RSPM (R 4.2.0)
##  mgcv           1.8-40     2022-03-29 [2] RSPM (R 4.2.0)
##  munsell        0.5.0      2018-06-12 [2] RSPM (R 4.2.0)
##  nlme           3.1-158    2022-06-15 [2] RSPM (R 4.2.0)
##  officer        0.4.3      2022-06-12 [2] RSPM (R 4.2.0)
##  pillar         1.8.0      2022-07-18 [2] RSPM (R 4.2.0)
##  pkgconfig      2.0.3      2019-09-22 [2] RSPM (R 4.2.0)
##  purrr          0.3.4      2020-04-17 [2] RSPM (R 4.2.0)
##  R6             2.5.1      2021-08-19 [2] RSPM (R 4.2.0)
##  Rcpp           1.0.9      2022-07-08 [2] RSPM (R 4.2.0)
##  rlang          1.0.4      2022-07-12 [2] RSPM (R 4.2.0)
##  rmarkdown      2.14       2022-04-25 [2] RSPM (R 4.2.0)
##  rstudioapi     0.13       2020-11-12 [2] RSPM (R 4.2.0)
##  rversions    * 2.1.1      2021-05-31 [2] RSPM (R 4.2.0)
##  sass           0.4.2      2022-07-16 [2] RSPM (R 4.2.0)
##  scales         1.2.0      2022-04-13 [2] RSPM (R 4.2.0)
##  sessioninfo    1.2.2      2021-12-06 [2] RSPM (R 4.2.0)
##  stringi        1.7.8      2022-07-11 [2] RSPM (R 4.2.0)
##  stringr        1.4.0      2019-02-10 [2] RSPM (R 4.2.0)
##  systemfonts    1.0.4      2022-02-11 [2] RSPM (R 4.2.0)
##  textshaping    0.3.6      2021-10-13 [2] RSPM (R 4.2.0)
##  tibble         3.1.7      2022-05-03 [2] RSPM (R 4.2.0)
##  tidyr        * 1.2.0      2022-02-01 [2] RSPM (R 4.2.0)
##  tidyselect     1.1.2      2022-02-21 [2] RSPM (R 4.2.0)
##  utf8           1.2.2      2021-07-24 [2] RSPM (R 4.2.0)
##  uuid           1.1-0      2022-04-19 [2] RSPM (R 4.2.0)
##  vctrs          0.4.1      2022-04-13 [2] RSPM (R 4.2.0)
##  withr          2.5.0      2022-03-03 [2] RSPM (R 4.2.0)
##  xfun           0.31       2022-05-10 [2] RSPM (R 4.2.0)
##  xml2           1.3.3      2021-11-30 [2] RSPM (R 4.2.0)
##  yaml           2.3.5      2022-02-21 [2] RSPM (R 4.2.0)
##  zip            2.2.0      2021-05-31 [2] RSPM (R 4.2.0)
## 
##  [1] /home/lluis/bin/R/4.2.1
##  [2] /usr/lib/R/site-library
##  [3] /usr/lib/R/library
## 
## ----------------------------------------------------------------------------------------------------------------------&lt;/code&gt;&lt;/pre&gt;
&lt;/details&gt;
&lt;/div&gt;
&lt;/div&gt;
</description>
    </item>
    
    <item>
      <title>Upgrading rtweet to 1.0.2</title>
      <link>https://llrs.dev/post/2022/07/04/rtweet-1-0-0/</link>
      <pubDate>Mon, 04 Jul 2022 00:00:00 +0000</pubDate>
      <guid>https://llrs.dev/post/2022/07/04/rtweet-1-0-0/</guid>
      <description>


&lt;p&gt;In this post I will provide some examples of what has changed between rtweet 0.7.0 and rtweet 1.0.2.
I hope both the changes and this guide will help all users.
I highlight the most important and interesting changes in this blog post; for a full list of changes you can consult the &lt;a href=&#34;https://docs.ropensci.org/rtweet/news/index.html&#34;&gt;NEWS&lt;/a&gt;.&lt;/p&gt;
&lt;div id=&#34;big-breaking-changes&#34; class=&#34;section level2&#34;&gt;
&lt;h2&gt;&lt;strong&gt;Big breaking changes&lt;/strong&gt;&lt;/h2&gt;
&lt;div id=&#34;more-consistent-output&#34; class=&#34;section level3&#34;&gt;
&lt;h3&gt;More consistent output&lt;/h3&gt;
&lt;p&gt;This is probably what will affect the most users.
All functions that return data about tweets&lt;a href=&#34;#fn1&#34; class=&#34;footnote-ref&#34; id=&#34;fnref1&#34;&gt;&lt;sup&gt;1&lt;/sup&gt;&lt;/a&gt; now return the same columns.&lt;/p&gt;
&lt;p&gt;For example if we search some tweets we’ll get the following columns:&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;&amp;gt; tweets &amp;lt;- search_tweets(&amp;quot;weather&amp;quot;)
&amp;gt; colnames(tweets)
 [1] &amp;quot;created_at&amp;quot;                    &amp;quot;id&amp;quot;                           
 [3] &amp;quot;id_str&amp;quot;                        &amp;quot;full_text&amp;quot;                    
 [5] &amp;quot;truncated&amp;quot;                     &amp;quot;display_text_range&amp;quot;           
 [7] &amp;quot;entities&amp;quot;                      &amp;quot;metadata&amp;quot;                     
 [9] &amp;quot;source&amp;quot;                        &amp;quot;in_reply_to_status_id&amp;quot;        
[11] &amp;quot;in_reply_to_status_id_str&amp;quot;     &amp;quot;in_reply_to_user_id&amp;quot;          
[13] &amp;quot;in_reply_to_user_id_str&amp;quot;       &amp;quot;in_reply_to_screen_name&amp;quot;      
[15] &amp;quot;geo&amp;quot;                           &amp;quot;coordinates&amp;quot;                  
[17] &amp;quot;place&amp;quot;                         &amp;quot;contributors&amp;quot;                 
[19] &amp;quot;is_quote_status&amp;quot;               &amp;quot;retweet_count&amp;quot;                
[21] &amp;quot;favorite_count&amp;quot;                &amp;quot;favorited&amp;quot;                    
[23] &amp;quot;retweeted&amp;quot;                     &amp;quot;lang&amp;quot;                         
[25] &amp;quot;quoted_status_id&amp;quot;              &amp;quot;quoted_status_id_str&amp;quot;         
[27] &amp;quot;quoted_status&amp;quot;                 &amp;quot;possibly_sensitive&amp;quot;           
[29] &amp;quot;retweeted_status&amp;quot;              &amp;quot;text&amp;quot;                         
[31] &amp;quot;favorited_by&amp;quot;                  &amp;quot;scopes&amp;quot;                       
[33] &amp;quot;display_text_width&amp;quot;            &amp;quot;quoted_status_permalink&amp;quot;      
[35] &amp;quot;quote_count&amp;quot;                   &amp;quot;timestamp_ms&amp;quot;                 
[37] &amp;quot;reply_count&amp;quot;                   &amp;quot;filter_level&amp;quot;                 
[39] &amp;quot;query&amp;quot;                         &amp;quot;withheld_scope&amp;quot;               
[41] &amp;quot;withheld_copyright&amp;quot;            &amp;quot;withheld_in_countries&amp;quot;        
[43] &amp;quot;possibly_sensitive_appealable&amp;quot;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;rtweet now minimizes the processing of tweets and only returns the same data as provided by the API, while making it easier to handle in R.
However, to preserve the nested nature of the data returned, some fields are now nested inside others.
For example, previously the fields &lt;code&gt;&#34;bbox_coords&#34;&lt;/code&gt;, &lt;code&gt;&#34;geo_coords&#34;&lt;/code&gt; and &lt;code&gt;&#34;coords_coords&#34;&lt;/code&gt; were returned as separate columns, but they are now nested inside &lt;code&gt;&#34;place&#34;&lt;/code&gt;, &lt;code&gt;&#34;coordinates&#34;&lt;/code&gt; or &lt;code&gt;&#34;geo&#34;&lt;/code&gt; depending on where they are provided.
Some columns previously calculated by rtweet, like &lt;code&gt;&#34;rtweet_favorite_count&#34;&lt;/code&gt;, are no longer returned.
At the same time, it provides new columns about each tweet, like the &lt;code&gt;&#34;withheld_*&#34;&lt;/code&gt; columns.&lt;/p&gt;
&lt;p&gt;If you scanned through the columns you might have noticed that the columns &lt;code&gt;&#34;user_id&#34;&lt;/code&gt; and &lt;code&gt;&#34;screen_name&#34;&lt;/code&gt; are no longer returned.
This data is still returned by the API, but it is now made available to rtweet users via &lt;code&gt;users_data()&lt;/code&gt;:&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;&amp;gt; colnames(users_data(tweets))
 [1] &amp;quot;id&amp;quot;                      &amp;quot;id_str&amp;quot;                 
 [3] &amp;quot;name&amp;quot;                    &amp;quot;screen_name&amp;quot;            
 [5] &amp;quot;location&amp;quot;                &amp;quot;description&amp;quot;            
 [7] &amp;quot;url&amp;quot;                     &amp;quot;protected&amp;quot;              
 [9] &amp;quot;followers_count&amp;quot;         &amp;quot;friends_count&amp;quot;          
[11] &amp;quot;listed_count&amp;quot;            &amp;quot;created_at&amp;quot;             
[13] &amp;quot;favourites_count&amp;quot;        &amp;quot;verified&amp;quot;               
[15] &amp;quot;statuses_count&amp;quot;          &amp;quot;profile_image_url_https&amp;quot;
[17] &amp;quot;profile_banner_url&amp;quot;      &amp;quot;default_profile&amp;quot;        
[19] &amp;quot;default_profile_image&amp;quot;   &amp;quot;withheld_in_countries&amp;quot;  
[21] &amp;quot;derived&amp;quot;                 &amp;quot;withheld_scope&amp;quot;         
[23] &amp;quot;entities&amp;quot; &lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This blog post should help you find the right data columns, but if you don’t find what you are looking for it might be nested inside a column.&lt;br /&gt;
Try using &lt;code&gt;dplyr::glimpse()&lt;/code&gt; to explore the data and locate nested columns.
For example the entities column (which is present in both tweets and users) has the following useful columns:&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;&amp;gt; names(tweets$entities[[1]])
[1] &amp;quot;hashtags&amp;quot;      &amp;quot;symbols&amp;quot;       &amp;quot;user_mentions&amp;quot; &amp;quot;urls&amp;quot;         
[5] &amp;quot;media&amp;quot; &lt;/code&gt;&lt;/pre&gt;
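&lt;p&gt;As a minimal sketch (assuming each element of &lt;code&gt;entities&lt;/code&gt; is a list whose &lt;code&gt;hashtags&lt;/code&gt; component is a data frame with a &lt;code&gt;text&lt;/code&gt; column, as the output above suggests), you could collect the hashtags of every tweet with base R:&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;# tweets as returned by search_tweets() above;
# one character vector of hashtags per tweet
hashtags &amp;lt;- lapply(tweets$entities, function(e) e$hashtags$text)
head(hashtags)&lt;/code&gt;&lt;/pre&gt;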
&lt;p&gt;Similarly if you look up a user via &lt;code&gt;search_users()&lt;/code&gt; or &lt;code&gt;lookup_users()&lt;/code&gt; you’ll get consistent data:&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;&amp;gt; users &amp;lt;- lookup_users(c(&amp;quot;twitter&amp;quot;, &amp;quot;rladiesglobal&amp;quot;, &amp;quot;_R_Foundation&amp;quot;))
&amp;gt; colnames(users)
 [1] &amp;quot;id&amp;quot;                      &amp;quot;id_str&amp;quot;                 
 [3] &amp;quot;name&amp;quot;                    &amp;quot;screen_name&amp;quot;            
 [5] &amp;quot;location&amp;quot;                &amp;quot;description&amp;quot;            
 [7] &amp;quot;url&amp;quot;                     &amp;quot;protected&amp;quot;              
 [9] &amp;quot;followers_count&amp;quot;         &amp;quot;friends_count&amp;quot;          
[11] &amp;quot;listed_count&amp;quot;            &amp;quot;created_at&amp;quot;             
[13] &amp;quot;favourites_count&amp;quot;        &amp;quot;verified&amp;quot;               
[15] &amp;quot;statuses_count&amp;quot;          &amp;quot;profile_image_url_https&amp;quot;
[17] &amp;quot;profile_banner_url&amp;quot;      &amp;quot;default_profile&amp;quot;        
[19] &amp;quot;default_profile_image&amp;quot;   &amp;quot;withheld_in_countries&amp;quot;  
[21] &amp;quot;derived&amp;quot;                 &amp;quot;withheld_scope&amp;quot;         
[23] &amp;quot;entities&amp;quot;               &lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can use &lt;code&gt;tweets_data()&lt;/code&gt; to retrieve information about their latest tweet:&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;&amp;gt; colnames(tweets_data(users))
 [1] &amp;quot;created_at&amp;quot;                    &amp;quot;id&amp;quot;                           
 [3] &amp;quot;id_str&amp;quot;                        &amp;quot;text&amp;quot;                         
 [5] &amp;quot;truncated&amp;quot;                     &amp;quot;entities&amp;quot;                     
 [7] &amp;quot;source&amp;quot;                        &amp;quot;in_reply_to_status_id&amp;quot;        
 [9] &amp;quot;in_reply_to_status_id_str&amp;quot;     &amp;quot;in_reply_to_user_id&amp;quot;          
[11] &amp;quot;in_reply_to_user_id_str&amp;quot;       &amp;quot;in_reply_to_screen_name&amp;quot;      
[13] &amp;quot;geo&amp;quot;                           &amp;quot;coordinates&amp;quot;                  
[15] &amp;quot;place&amp;quot;                         &amp;quot;contributors&amp;quot;                 
[17] &amp;quot;is_quote_status&amp;quot;               &amp;quot;retweet_count&amp;quot;                
[19] &amp;quot;favorite_count&amp;quot;                &amp;quot;favorited&amp;quot;                    
[21] &amp;quot;retweeted&amp;quot;                     &amp;quot;lang&amp;quot;                         
[23] &amp;quot;retweeted_status&amp;quot;              &amp;quot;possibly_sensitive&amp;quot;           
[25] &amp;quot;quoted_status&amp;quot;                 &amp;quot;display_text_width&amp;quot;           
[27] &amp;quot;user&amp;quot;                          &amp;quot;full_text&amp;quot;                    
[29] &amp;quot;favorited_by&amp;quot;                  &amp;quot;scopes&amp;quot;                       
[31] &amp;quot;display_text_range&amp;quot;            &amp;quot;quoted_status_id&amp;quot;             
[33] &amp;quot;quoted_status_id_str&amp;quot;          &amp;quot;quoted_status_permalink&amp;quot;      
[35] &amp;quot;quote_count&amp;quot;                   &amp;quot;timestamp_ms&amp;quot;                 
[37] &amp;quot;reply_count&amp;quot;                   &amp;quot;filter_level&amp;quot;                 
[39] &amp;quot;metadata&amp;quot;                      &amp;quot;query&amp;quot;                        
[41] &amp;quot;withheld_scope&amp;quot;                &amp;quot;withheld_copyright&amp;quot;           
[43] &amp;quot;withheld_in_countries&amp;quot;         &amp;quot;possibly_sensitive_appealable&amp;quot;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can merge them via:&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;users_and_last_tweets &amp;lt;- cbind(users, id_str = tweets_data(users)[, &amp;quot;id_str&amp;quot;])&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In the future (&lt;a href=&#34;#future&#34;&gt;see below&lt;/a&gt;), helper functions will make managing the output of rtweet easier.&lt;/p&gt;
&lt;p&gt;Finally, &lt;code&gt;get_followers()&lt;/code&gt; and &lt;code&gt;get_friends()&lt;/code&gt; now return the same columns:&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;&amp;gt; colnames(get_followers(&amp;quot;_R_Foundation&amp;quot;))
[1] &amp;quot;from_id&amp;quot; &amp;quot;to_id&amp;quot;  
&amp;gt; colnames(get_friends(&amp;quot;_R_Foundation&amp;quot;))
[1] &amp;quot;from_id&amp;quot; &amp;quot;to_id&amp;quot;  &lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will make it easier to build networks of connections (although you might want to convert screen names to ids or vice versa).&lt;/p&gt;
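&lt;p&gt;For example, here is a minimal sketch of building such a network with the &lt;code&gt;igraph&lt;/code&gt; package (not part of rtweet; the edge list is the &lt;code&gt;from_id&lt;/code&gt;/&lt;code&gt;to_id&lt;/code&gt; output shown above):&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;library(igraph)
followers &amp;lt;- get_followers(&amp;quot;_R_Foundation&amp;quot;)
# Each row becomes an edge: from_id (the follower) to to_id (the account)
network &amp;lt;- graph_from_data_frame(followers, directed = TRUE)
summary(network)&lt;/code&gt;&lt;/pre&gt;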
&lt;/div&gt;
&lt;div id=&#34;more-consistent-interface&#34; class=&#34;section level3&#34;&gt;
&lt;h3&gt;More consistent interface&lt;/h3&gt;
&lt;p&gt;All paginated functions that don’t return tweets now use a consistent pagination interface (except the premium endpoints).
They all store the “next cursor” in an &lt;code&gt;rtweet_cursor&lt;/code&gt; attribute, which will be automatically retrieved when you use the &lt;code&gt;cursor&lt;/code&gt; argument.
This will make it easier to continue a query you started:&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;users &amp;lt;- get_followers(&amp;quot;_R_Foundation&amp;quot;)
users
     
# use `cursor` to find the next &amp;quot;page&amp;quot; of results
more_users &amp;lt;- get_followers(&amp;quot;_R_Foundation&amp;quot;, cursor = users)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;They support &lt;code&gt;max_id&lt;/code&gt; and &lt;code&gt;since_id&lt;/code&gt; to find earlier and later tweets respectively:&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;# Retrieve all the tweets made since the previous request
newer &amp;lt;- search_tweets(&amp;quot;weather&amp;quot;, since_id = tweets)
# Retrieve tweets made before the previous request
older &amp;lt;- search_tweets(&amp;quot;weather&amp;quot;, max_id = tweets)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you want more tweets than the rate limits of the API allow, you can use &lt;code&gt;retryonratelimit&lt;/code&gt; to wait as long as needed:&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;long &amp;lt;- search_tweets(&amp;quot;weather&amp;quot;, n = 1000, retryonratelimit = TRUE)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will keep your terminal busy until the 1000 tweets are retrieved.&lt;/p&gt;
&lt;/div&gt;
&lt;div id=&#34;saving-data&#34; class=&#34;section level3&#34;&gt;
&lt;h3&gt;Saving data&lt;/h3&gt;
&lt;p&gt;An unexpected consequence of returning more data (now matching that returned by the API) is that it is harder to save it in a tabular format.
For instance, one tweet might have one media item, mention two users and have three hashtags.
There isn’t a simple way to save this in a single row uniformly for all tweets, and any attempt could lead to confusion.&lt;/p&gt;
&lt;p&gt;This resulted in deprecating &lt;code&gt;save_as_csv&lt;/code&gt;, &lt;code&gt;read_twitter_csv&lt;/code&gt; and related functions, because they don’t work with the new data structure and it isn’t possible to load the complete data from a csv.
They will be removed in later versions.&lt;/p&gt;
&lt;p&gt;Many users will benefit from saving to RDS (e.g., &lt;code&gt;saveRDS()&lt;/code&gt; or &lt;code&gt;readr::write_rds()&lt;/code&gt;), and those wanting to export to tabular format can simplify the data to include only that of interest before saving with generic R functions (e.g., &lt;code&gt;write.csv()&lt;/code&gt; or &lt;code&gt;readr::write_csv()&lt;/code&gt;).&lt;/p&gt;
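&lt;p&gt;A minimal sketch of both approaches (the selected columns are just an illustrative subset of those listed above):&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;library(rtweet)
tweets &amp;lt;- search_tweets(&amp;quot;weather&amp;quot;)

# Keep the full nested data:
saveRDS(tweets, &amp;quot;tweets.rds&amp;quot;)
tweets &amp;lt;- readRDS(&amp;quot;tweets.rds&amp;quot;)

# Or flatten to the columns of interest before exporting to csv:
flat &amp;lt;- tweets[, c(&amp;quot;id_str&amp;quot;, &amp;quot;created_at&amp;quot;, &amp;quot;full_text&amp;quot;, &amp;quot;lang&amp;quot;)]
write.csv(flat, &amp;quot;tweets.csv&amp;quot;, row.names = FALSE)&lt;/code&gt;&lt;/pre&gt;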
&lt;/div&gt;
&lt;/div&gt;
&lt;div id=&#34;other-breaking-changes&#34; class=&#34;section level2&#34;&gt;
&lt;h2&gt;&lt;strong&gt;Other breaking changes&lt;/strong&gt;&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Accessibility is important and for this reason if you tweet via &lt;code&gt;post_tweet()&lt;/code&gt; and add an image, gif or video you’ll need to provide the media alternative text.
Without &lt;code&gt;media_alt_text&lt;/code&gt; it will not allow you to post.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;tweet_shot()&lt;/code&gt; has been deprecated as it no longer works correctly.
It might be possible to bring it back, but the code is complex and I do not understand enough to maintain it.
If you’re interested in seeing this feature return, checkout the discussion about this &lt;a href=&#34;https://github.com/ropensci/rtweet/issues/458&#34;&gt;issue&lt;/a&gt; and let me know if you have any suggestions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;rtweet also used to provide the data sets &lt;code&gt;emojis&lt;/code&gt;, &lt;code&gt;langs&lt;/code&gt; and &lt;code&gt;stopwordslangs&lt;/code&gt;.
These are useful resources for text mining in general, not only for tweets, but they need to be updated to stay helpful and would be better placed in other packages; for instance, emojis is now in the &lt;a href=&#34;https://cran.r-project.org/package=bdpar&#34;&gt;bdpar package&lt;/a&gt;.
Therefore they are no longer available in rtweet.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Functions like &lt;code&gt;suggested_*()&lt;/code&gt; have been removed, as they had been broken since 2019.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div id=&#34;easier-authentication&#34; class=&#34;section level2&#34;&gt;
&lt;h2&gt;&lt;strong&gt;Easier authentication&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;An exciting part of this release has been a big rewrite of the authentication protocol.
While it is compatible with previous rtweet authentication methods, it also has some important new functions which make it easier to work with rtweet and the Twitter API in different ways.&lt;/p&gt;
&lt;div id=&#34;different-ways-to-authenticate&#34; class=&#34;section level3&#34;&gt;
&lt;h3&gt;Different ways to authenticate&lt;/h3&gt;
&lt;p&gt;If you just want to test the package, use the default authentication &lt;code&gt;auth_setup_default()&lt;/code&gt; that comes with rtweet.
If you use it for one or two days you won’t notice any problem.&lt;/p&gt;
&lt;p&gt;If you want to use the package for more than a couple of days, I recommend you set up your own token via &lt;code&gt;rtweet_user()&lt;/code&gt;.
It will open your default browser so you can authenticate with your account.
This authentication won’t let you do everything, but it will help you avoid running out of requests and being rate-limited.&lt;/p&gt;
&lt;p&gt;If you plan to make heavy use of the package, I recommend registering yourself as developer and using one of the following two mechanisms, depending on your plans:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Collect data and analyze: &lt;code&gt;rtweet_app()&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Set up a bot: &lt;code&gt;rtweet_bot()&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
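&lt;p&gt;As a hedged sketch of the app-based flow (the app name is hypothetical; &lt;code&gt;rtweet_app()&lt;/code&gt; will prompt for the bearer token from your developer account):&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;library(rtweet)
auth &amp;lt;- rtweet_app()        # prompts for the app&amp;#39;s bearer token
auth_save(auth, &amp;quot;my_app&amp;quot;)   # store it for later sessions
auth_as(&amp;quot;my_app&amp;quot;)           # use it from now on&lt;/code&gt;&lt;/pre&gt;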
&lt;p&gt;Find more information in the &lt;a href=&#34;https://docs.ropensci.org/rtweet/articles/auth.html&#34;&gt;Authentication with rtweet vignette&lt;/a&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div id=&#34;storing-credentials&#34; class=&#34;section level3&#34;&gt;
&lt;h3&gt;Storing credentials&lt;/h3&gt;
&lt;p&gt;Previously rtweet saved each token created, but now non-default tokens are only saved if you ask. You can save them manually via &lt;code&gt;auth_save(token, &#34;my_app&#34;)&lt;/code&gt;.
As a bonus, if you name your token “default” (&lt;code&gt;auth_save(token, &#34;default&#34;)&lt;/code&gt;) it will be used automatically upon loading the library.&lt;/p&gt;
&lt;p&gt;Further, tokens are now saved in the location output by &lt;code&gt;tools::R_user_dir(&#34;rtweet&#34;, &#34;config&#34;)&lt;/code&gt;, rather than in your home directory.
If you have previous tokens saved or problems identifying which token is which use &lt;code&gt;auth_sitrep()&lt;/code&gt;.
This will provide clues about which tokens might be duplicated or misconfigured, but it won’t check whether they work.
It will also automatically move your tokens to the new path.&lt;/p&gt;
&lt;p&gt;To check which credentials you have stored use &lt;code&gt;auth_list()&lt;/code&gt; and load them via &lt;code&gt;auth_as(&#34;my_app&#34;)&lt;/code&gt;.
All the rtweet functions will use the latest token loaded with &lt;code&gt;auth_as&lt;/code&gt; (unless you manually specify one when calling it).
If you are not sure which token you are using, call &lt;code&gt;auth_get()&lt;/code&gt;: it will return the token in use, list the available ones, or ask you to authenticate.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div id=&#34;other-changes-of-note&#34; class=&#34;section level2&#34;&gt;
&lt;h2&gt;&lt;strong&gt;Other changes of note&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;This is a list of other changes that aren’t too big or breaking, but are still worth a mention:&lt;/p&gt;
&lt;div id=&#34;iteration-and-continuation-of-requests&#34; class=&#34;section level3&#34;&gt;
&lt;h3&gt;Iteration and continuation of requests&lt;/h3&gt;
&lt;p&gt;Using cursors, pagination or waiting until you can make more queries is now easier.
For example you can continue previous requests via:&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;users &amp;lt;- get_followers(&amp;quot;_R_Foundation&amp;quot;)
users

# use `cursor` to find the next &amp;quot;page&amp;quot; of results
more_users &amp;lt;- get_followers(&amp;quot;_R_Foundation&amp;quot;, cursor = users)&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;div id=&#34;additions&#34; class=&#34;section level3&#34;&gt;
&lt;h3&gt;Additions&lt;/h3&gt;
&lt;p&gt;There is now a function to find a thread.
You can start from any tweet and it will find all the tweets in the thread:
&lt;code&gt;tweet_threading(&#34;1461776330584956929&#34;)&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;There is a lot of interest in downloading and keeping track of interactions on Twitter.
The amount of interest is big enough that Twitter is releasing a new API to provide more information of this nature.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div id=&#34;future&#34; class=&#34;section level2&#34;&gt;
&lt;h2&gt;&lt;strong&gt;Future&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;Twitter API v2 is being released and will soon replace API v1.
rtweet, up to and including this release, uses API v1, so it will need to adapt to the new endpoints and the new data they return.&lt;/p&gt;
&lt;p&gt;First will be the streaming endpoints in November, so expect more (breaking?) changes around those dates if not earlier.&lt;/p&gt;
&lt;p&gt;I would also like to make it easier for users, dependencies and the package itself to handle the outputs.
To that end I would like to provide some classes to handle the different types of objects it returns.&lt;/p&gt;
&lt;p&gt;This will help avoid some of the current shortcomings.
Specifically I would like to provide functions to make it easier to reply to previous tweets,
extract nested data, and subset tweets and the accompanying user information.&lt;/p&gt;
&lt;/div&gt;
&lt;div id=&#34;conclusions&#34; class=&#34;section level2&#34;&gt;
&lt;h2&gt;&lt;strong&gt;Conclusions&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;While I made many breaking changes, I hope they will smooth future development and help both users and maintainers.&lt;/p&gt;
&lt;p&gt;Feel free to ask on the &lt;a href=&#34;https://discuss.ropensci.org/tag/rtweet&#34;&gt;rOpenSci community&lt;/a&gt; if you have questions about the transition or find something amiss.
Please let me know! It will help me prioritize which endpoints are more relevant to the community.
(And yes, the academic archive endpoint is on the radar.)&lt;/p&gt;
&lt;p&gt;It is also possible that I overlooked something and thought the code was working when it isn’t.
For example, after several months of changing the way the API responses are parsed, several users found that some elements weren’t being handled.
Let me know about such cases and I’ll try to fix them.&lt;/p&gt;
&lt;p&gt;In case you find a bug, check the open issues and if it has not already been reported, open an &lt;a href=&#34;https://github.com/ropensci/rtweet/issues/&#34;&gt;issue on GitHub&lt;/a&gt;.
Don’t forget to make a &lt;a href=&#34;https://cran.r-project.org/web/packages/reprex/readme/README.html&#34;&gt;reprex&lt;/a&gt; and, if possible, provide the IDs of the tweets you are having trouble with.
Unfortunately, it has happened that when I came to look at a bug I couldn’t reproduce it, as I wasn’t able to find the tweet that caused the error.&lt;/p&gt;
&lt;p&gt;This release includes contributions from Hadley Wickham, Bob Rudis, Alex Hayes, Simon Heß, Diego Hernán, Michael Chirico, Jonathan Sidi, Jon Harmon, Andrew Fraser and many others who reported bugs or provided feedback.
Many thanks to all of you for using rtweet, for your interest in keeping it working, and for improving it for everyone.&lt;/p&gt;
&lt;p&gt;Finally, you can read the whole &lt;a href=&#34;https://docs.ropensci.org/rtweet/news/index.html&#34;&gt;NEWS online&lt;/a&gt; and the examples.&lt;/p&gt;
&lt;p&gt;Happy tweeting!&lt;/p&gt;
&lt;/div&gt;
&lt;div id=&#34;acknowledgements&#34; class=&#34;section level2&#34;&gt;
&lt;h2&gt;Acknowledgements&lt;/h2&gt;
&lt;p&gt;This is a repost of the &lt;a href=&#34;https://ropensci.org/blog/2022/07/21/rtweet-1-0-0/&#34;&gt;entry for rOpenSci&lt;/a&gt;.
The post was edited and improved by Yanina Bellini Saibene and Steffi LaZerte, the community manager and assistant. Many thanks!&lt;/p&gt;
&lt;/div&gt;
&lt;div class=&#34;footnotes footnotes-end-of-document&#34;&gt;
&lt;hr /&gt;
&lt;ol&gt;
&lt;li id=&#34;fn1&#34;&gt;&lt;p&gt;Specifically these: &lt;code&gt;get_favorites()&lt;/code&gt;, &lt;code&gt;get_favorites_user()&lt;/code&gt;, &lt;code&gt;get_mentions()&lt;/code&gt;,
&lt;code&gt;get_my_timeline()&lt;/code&gt;, &lt;code&gt;get_retweets()&lt;/code&gt;, &lt;code&gt;get_timeline()&lt;/code&gt;, &lt;code&gt;get_timeline_user()&lt;/code&gt;,
&lt;code&gt;lists_statuses()&lt;/code&gt;, &lt;code&gt;lookup_statuses()&lt;/code&gt;, &lt;code&gt;lookup_tweets()&lt;/code&gt;, &lt;code&gt;search_30day()&lt;/code&gt;,
&lt;code&gt;search_fullarchive()&lt;/code&gt;, &lt;code&gt;search_tweets()&lt;/code&gt;, &lt;code&gt;tweet_shot()&lt;/code&gt; and &lt;code&gt;tweet_threading()&lt;/code&gt;.&lt;a href=&#34;#fnref1&#34; class=&#34;footnote-back&#34;&gt;↩︎&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
</description>
    </item>
    
    <item>
      <title>Creating an RSS feed for r-bloggers for my blogdown</title>
      <link>https://llrs.dev/post/2021/12/30/r-blogger-blogdown/</link>
      <pubDate>Thu, 30 Dec 2021 00:00:00 +0000</pubDate>
      <guid>https://llrs.dev/post/2021/12/30/r-blogger-blogdown/</guid>
      <description>


&lt;p&gt;I am following &lt;a href=&#34;https://yongfu.name/2018/12/13/hugo_rss.html&#34;&gt;Yongfu’s blog&lt;/a&gt;:
I copy here the content in case it is moved:&lt;/p&gt;
&lt;blockquote&gt;
&lt;pre&gt;&lt;code&gt;Copy the default RSS template&lt;/code&gt;&lt;/pre&gt;
&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;Change &lt;description&gt;{{ .Summary | html }}&lt;/description&gt; to &lt;description&gt;{{ .Content | html }}&lt;/description&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;Create subdirectories in layouts/ and save the RSS template files (name them rss.xml) in them (one for each subdirectory).&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I was confused at first: I wanted to copy the rss.xml file mentioned, but didn’t know where to find it.
I finally found the &lt;a href=&#34;https://gohugo.io/templates/rss/#the-embedded-rssxml&#34;&gt;section&lt;/a&gt; that links to the file I was supposed to copy.&lt;/p&gt;
&lt;p&gt;So far so good; without further testing I assumed this worked.
Following &lt;a href=&#34;https://gohugo.io/templates/rss/&#34;&gt;the instructions&lt;/a&gt;, I moved the rss.xml file to &lt;code&gt;layouts/post/&lt;/code&gt; to only report posts of the blog, not new projects or other pages Hugo generates.&lt;/p&gt;
&lt;p&gt;I also created an rss.xml in &lt;code&gt;layouts/categories/&lt;/code&gt;.
As the templates provided by gohugo.io didn’t work, I copied the &lt;a href=&#34;https://github.com/liao961120/Hugo-RSS/tree/master/layouts&#34;&gt;files from Yongfu’s Blog&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;To check that it worked I went to &lt;a href=&#34;https://llrs.dev/categories/cran/index.xml&#34;&gt;&lt;code&gt;llrs.dev/categories/cran/index.xml&lt;/code&gt;&lt;/a&gt;.
It did, but not for all the posts. I kept getting an error even when I set a limit of 20 posts in the config.toml file with &lt;code&gt;rssLimit = 20&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;I abandoned this for a while: other projects, the pandemic…&lt;/p&gt;
&lt;p&gt;When I came back I &lt;a href=&#34;https://gohugo.io/templates/rss/#the-embedded-rss-xml&#34; title=&#34;Hugo documentation about RSS&#34;&gt;updated the template&lt;/a&gt; on 30/12/2021 and the same error was still happening.
After some digging I found that there is a &lt;a href=&#34;https://github.com/gohugoio/hugo/issues/1740&#34;&gt;bug on Hugo&lt;/a&gt; generating invalid XML code for the feeds.&lt;/p&gt;
&lt;p&gt;I started looking for workarounds, but adding a PHP or Go function didn’t work well: I couldn’t correctly set up the PHP function and I don’t know how to write Go.&lt;/p&gt;
&lt;p&gt;Then I started thinking about limiting the feed, which led me to rssLimit. After several attempts I didn’t manage to make it work.&lt;/p&gt;
&lt;p&gt;But while reading &lt;a href=&#34;https://www.gavinwray.com/2018/05/10/hugo-rss-feed/&#34;&gt;a post&lt;/a&gt; I realized that rssLimit was no longer used (if it ever was in the first place).&lt;/p&gt;
&lt;p&gt;So after adding:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;[Services.RSS]
  limit = 10&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;to my config.toml with the new template, the XML feeds are working again.
There is only one problem: the &lt;a href=&#34;https://llrs.dev/tags/twitter/index.xml&#34;&gt;twitter tag feed&lt;/a&gt; doesn’t work, because &lt;a href=&#34;https://llrs.dev/post/2019/08/13/twitter-bot/&#34;&gt;the post that generates&lt;/a&gt; invalid content is there.&lt;/p&gt;
&lt;p&gt;I hope this saves people some time if they try to do the same. If you follow the linked posts, &lt;a href=&#34;https://coolbutuseless.bitbucket.io/2018/02/07/blogdown-rss-feed-of-full-articles/&#34;&gt;like this one&lt;/a&gt;, and take into account this change in how to limit the RSS feed, you might avoid the problem.&lt;/p&gt;
&lt;div id=&#34;reproducibility&#34; class=&#34;section level3&#34;&gt;
&lt;h3&gt;Reproducibility&lt;/h3&gt;
&lt;details&gt;
&lt;pre&gt;&lt;code&gt;## ─ Session info ───────────────────────────────────────────────────────────────────────────────────────────────────────
##  setting  value
##  version  R version 4.2.0 (2022-04-22)
##  os       Ubuntu 20.04.4 LTS
##  system   x86_64, linux-gnu
##  ui       X11
##  language (EN)
##  collate  en_US.UTF-8
##  ctype    en_US.UTF-8
##  tz       Europe/Madrid
##  date     2022-05-09
##  pandoc   2.17.1.1 @ /usr/lib/rstudio/bin/quarto/bin/ (via rmarkdown)
## 
## ─ Packages ───────────────────────────────────────────────────────────────────────────────────────────────────────────
##  package     * version date (UTC) lib source
##  blogdown      1.9     2022-03-28 [1] CRAN (R 4.2.0)
##  bookdown      0.26    2022-04-15 [1] CRAN (R 4.2.0)
##  bslib         0.3.1   2021-10-06 [1] CRAN (R 4.2.0)
##  cli           3.3.0   2022-04-25 [1] CRAN (R 4.2.0)
##  digest        0.6.29  2021-12-01 [1] CRAN (R 4.2.0)
##  evaluate      0.15    2022-02-18 [1] CRAN (R 4.2.0)
##  fastmap       1.1.0   2021-01-25 [1] CRAN (R 4.2.0)
##  htmltools     0.5.2   2021-08-25 [1] CRAN (R 4.2.0)
##  jquerylib     0.1.4   2021-04-26 [1] CRAN (R 4.2.0)
##  jsonlite      1.8.0   2022-02-22 [1] CRAN (R 4.2.0)
##  knitr         1.39    2022-04-26 [1] CRAN (R 4.2.0)
##  magrittr      2.0.3   2022-03-30 [1] CRAN (R 4.2.0)
##  R6            2.5.1   2021-08-19 [1] CRAN (R 4.2.0)
##  rlang         1.0.2   2022-03-04 [1] CRAN (R 4.2.0)
##  rmarkdown     2.14    2022-04-25 [1] CRAN (R 4.2.0)
##  rstudioapi    0.13    2020-11-12 [1] CRAN (R 4.2.0)
##  sass          0.4.1   2022-03-23 [1] CRAN (R 4.2.0)
##  sessioninfo   1.2.2   2021-12-06 [1] CRAN (R 4.2.0)
##  stringi       1.7.6   2021-11-29 [1] CRAN (R 4.2.0)
##  stringr       1.4.0   2019-02-10 [1] CRAN (R 4.2.0)
##  xfun          0.30    2022-03-02 [1] CRAN (R 4.2.0)
##  yaml          2.3.5   2022-02-21 [1] CRAN (R 4.2.0)
## 
##  [1] /home/lluis/bin/R/4.2.0/lib/R/library
## 
## ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────&lt;/code&gt;&lt;/pre&gt;
&lt;/details&gt;
&lt;/div&gt;
</description>
    </item>
    
    <item>
      <title>Reasons why packages are archived on CRAN</title>
      <link>https://llrs.dev/post/2021/12/07/reasons-cran-archivals/</link>
      <pubDate>Tue, 07 Dec 2021 00:00:00 +0000</pubDate>
      <guid>https://llrs.dev/post/2021/12/07/reasons-cran-archivals/</guid>
      <description>


&lt;p&gt;On the Repositories working group of the R Consortium, Rich FitzJohn posted &lt;a href=&#34;https://github.com/RConsortium/r-repositories-wg/issues/8#issuecomment-979486806&#34;&gt;a comment&lt;/a&gt; linking to &lt;a href=&#34;https://cran.r-project.org/src/contrib/PACKAGES.in&#34;&gt;a file&lt;/a&gt; where the CRAN team seems to store the package history and use it for checks.&lt;/p&gt;
&lt;p&gt;The structure is not defined anywhere I could find (I haven’t looked much to be honest).&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Package: &amp;lt;package name&amp;gt;
X-CRAN-Comment: Archived on YYYY-MM-DD as &amp;lt;reason&amp;gt;.
X-CRAN-History: Archived on YYYY-MM-DD as &amp;lt;reason&amp;gt;.
  Unarchived on YYYY-MM-DD.
  .
  &amp;lt;Optional clarification of archival reason&amp;gt;
&amp;lt;Optional fields like License_restricts_use, Replaced_by, Maintainer: ORPHANED, OS_type: unix&amp;gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I think the X-CRAN-Comment is what appears on the website of an archived package, like for the &lt;a href=&#34;https://cran.r-project.org/package=radix&#34;&gt;radix package&lt;/a&gt;. However, other comments on the website do not appear in that file.&lt;/p&gt;
&lt;p&gt;In addition, the file is missing records of archiving and unarchiving for some packages, although it has records from 2013 or earlier up to now. We can still use it to understand the &lt;em&gt;reasons&lt;/em&gt; packages are archived, which seems to be the main purpose of the file.&lt;/p&gt;
&lt;div id=&#34;the-data&#34; class=&#34;section level1&#34;&gt;
&lt;h1&gt;The data&lt;/h1&gt;
&lt;p&gt;The first step is to read the record.
As it has a &lt;code&gt;key: value&lt;/code&gt; structure similar to the DESCRIPTION file of packages, it appears to be in DCF (Debian Control File) format, which is easy to read with the &lt;code&gt;read.dcf&lt;/code&gt; function.&lt;/p&gt;
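&lt;p&gt;A minimal sketch of this step (the field names are the ones shown above):&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;# PACKAGES.in is a DCF file, so read.dcf parses it into a character matrix
con &amp;lt;- url(&amp;quot;https://cran.r-project.org/src/contrib/PACKAGES.in&amp;quot;)
packages_in &amp;lt;- as.data.frame(read.dcf(con))
colnames(packages_in) # includes Package, X-CRAN-Comment and X-CRAN-History, among others&lt;/code&gt;&lt;/pre&gt;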
&lt;div id=&#34;exploring&#34; class=&#34;section level2&#34;&gt;
&lt;h2&gt;Exploring&lt;/h2&gt;
&lt;p&gt;A brief exploration of the data:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style=&#34;text-align:left;&#34;&gt;
comment
&lt;/th&gt;
&lt;th style=&#34;text-align:left;&#34;&gt;
history
&lt;/th&gt;
&lt;th style=&#34;text-align:right;&#34;&gt;
packages
&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
yes
&lt;/td&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
no
&lt;/td&gt;
&lt;td style=&#34;text-align:right;&#34;&gt;
3612
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
no
&lt;/td&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
yes
&lt;/td&gt;
&lt;td style=&#34;text-align:right;&#34;&gt;
2345
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
yes
&lt;/td&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
yes
&lt;/td&gt;
&lt;td style=&#34;text-align:right;&#34;&gt;
434
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
no
&lt;/td&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
no
&lt;/td&gt;
&lt;td style=&#34;text-align:right;&#34;&gt;
70
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Many packages have either comments or history, but relatively few have both.
I’m not sure when each of them is used, as I would expect all packages with a history to also have a comment.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style=&#34;text-align:left;&#34;&gt;
Replaced_by
&lt;/th&gt;
&lt;th style=&#34;text-align:right;&#34;&gt;
packages
&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
no
&lt;/td&gt;
&lt;td style=&#34;text-align:right;&#34;&gt;
6360
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
yes
&lt;/td&gt;
&lt;td style=&#34;text-align:right;&#34;&gt;
101
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Some packages are simply replaced by another package.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style=&#34;text-align:left;&#34;&gt;
Maintainer
&lt;/th&gt;
&lt;th style=&#34;text-align:right;&#34;&gt;
packages
&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
no
&lt;/td&gt;
&lt;td style=&#34;text-align:right;&#34;&gt;
6366
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
yes
&lt;/td&gt;
&lt;td style=&#34;text-align:right;&#34;&gt;
95
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Most of the packages that have a Maintainer field are orphaned/archived.
Does this mean that all the others are not orphaned?&lt;/p&gt;
&lt;/div&gt;
&lt;div id=&#34;extracting-reasons&#34; class=&#34;section level2&#34;&gt;
&lt;h2&gt;Extracting reasons&lt;/h2&gt;
&lt;p&gt;Now that it is in an R data structure, we can extract the relevant information: dates, type of action and reasons for each archival event.
I use &lt;code&gt;strcapture&lt;/code&gt; for this task, with a regex to extract the action, the date and the explanation it might have.&lt;/p&gt;
&lt;p&gt;I don’t know how the file is written; probably it is a mix of automated tools and manual editing, so there isn’t a simple way to collect all the information in a structured way.
The structure has changed over the years, as have the details of what is stored, and some events are missing.
However, the extracted information should be enough for our purposes.&lt;/p&gt;
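&lt;p&gt;As a sketch of how &lt;code&gt;strcapture&lt;/code&gt; works here (the regex below is a simplified version, not the exact one used):&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;comment &amp;lt;- &amp;quot;Archived on 2021-03-01 as check problems were not corrected.&amp;quot;
proto &amp;lt;- data.frame(action = character(), date = character(), reason = character())
strcapture(&amp;quot;([A-Za-z]+) on ([0-9-]+) as (.*)&amp;quot;, comment, proto)
# returns a one-row data.frame with the action, date and reason&lt;/code&gt;&lt;/pre&gt;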
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style=&#34;text-align:left;&#34;&gt;
Action
&lt;/th&gt;
&lt;th style=&#34;text-align:right;&#34;&gt;
Events
&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
archived
&lt;/td&gt;
&lt;td style=&#34;text-align:right;&#34;&gt;
7096
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
orphaned
&lt;/td&gt;
&lt;td style=&#34;text-align:right;&#34;&gt;
341
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
removed
&lt;/td&gt;
&lt;td style=&#34;text-align:right;&#34;&gt;
113
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
renamed
&lt;/td&gt;
&lt;td style=&#34;text-align:right;&#34;&gt;
2
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
replaced
&lt;/td&gt;
&lt;td style=&#34;text-align:right;&#34;&gt;
4
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
unarchived
&lt;/td&gt;
&lt;td style=&#34;text-align:right;&#34;&gt;
2973
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;As expected, the most common recorded events are archivals, but there are some orphaned packages and even some removed packages.
Also note that the number of orphaned packages is greater than the number with the Maintainer field, supporting my theory that the format has changed and that this shouldn’t be taken as an exhaustive and complete analysis of archivals.&lt;/p&gt;
&lt;p&gt;How are they distributed over time?&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://llrs.dev/post/2021/12/07/reasons-cran-archivals/index.en_files/figure-html/plots_df-1.png&#34; width=&#34;864&#34; /&gt;&lt;/p&gt;
&lt;p&gt;Even if there are some events recorded from 2009, it seems that this file has been used more heavily recently (the last commit related to &lt;a href=&#34;https://github.com/wch/r-source/blame/trunk/src/library/tools/R/QC.R#L7778&#34;&gt;this was in 2015&lt;/a&gt;).
I know that some old events are not recorded in the file, because some packages currently on CRAN were archived at some point but have no unarchived action; the converse could happen too.
So this doesn’t necessarily mean that more packages are archived from CRAN now, but it is a clear indication that there is at least a more accurate record of archived packages in this file.&lt;/p&gt;
&lt;p&gt;Another source of records of archived packages might be &lt;a href=&#34;http://dirk.eddelbuettel.com/cranberries/cran/removed/&#34;&gt;cranberries&lt;/a&gt;. It would be nice to compare this file with the records on the database there.&lt;/p&gt;
&lt;p&gt;Now that most of the package events are collected and we have the reasons for the actions, we can explore and classify those reasons.
Using some simple regexes, I search for key words and sentences.&lt;/p&gt;
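&lt;p&gt;The classification can be sketched with &lt;code&gt;grepl&lt;/code&gt; calls like the following (the patterns and the &lt;em&gt;foo&lt;/em&gt; package are illustrative, not the exact ones used):&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;reasons &amp;lt;- c(&amp;quot;check problems were not corrected despite reminders&amp;quot;,
             &amp;quot;requires archived package foo&amp;quot;,
             &amp;quot;at the request of the maintainer&amp;quot;)
not_corrected &amp;lt;- grepl(&amp;quot;not corrected&amp;quot;, reasons)
dependencies  &amp;lt;- grepl(&amp;quot;archived package|requires&amp;quot;, reasons)
request       &amp;lt;- grepl(&amp;quot;request&amp;quot;, reasons)&lt;/code&gt;&lt;/pre&gt;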
&lt;p&gt;We can look at the most frequent reasons for archiving packages; these are the patterns I found with more than 100 cases:&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://llrs.dev/post/2021/12/07/reasons-cran-archivals/index.en_files/figure-html/reasons_top-1.png&#34; width=&#34;864&#34; /&gt;&lt;/p&gt;
&lt;p&gt;The most frequent reason is that errors or check problems are not corrected, even when there are reminders.&lt;br /&gt;
Next are the packages archived because they depend on other packages no longer on CRAN.&lt;br /&gt;
Some packages are replaced by others, and some maintainers might not want to continue supporting a package when they receive a message from CRAN about fixing an error.&lt;/p&gt;
&lt;p&gt;Policy violations make it into the top 5, but with fewer than 500 events.
Dependency problems are the sixth cause, followed by email errors (bouncing, incorrect addresses…), and then come sporadic problems with licenses, not updating for new releases of R, authorship disputes, or requests from the authors.&lt;/p&gt;
&lt;p&gt;Some of these reasons co-occur in the same event, but grouping them together we get a similar table:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style=&#34;text-align:left;&#34;&gt;
package_not_corrected
&lt;/th&gt;
&lt;th style=&#34;text-align:left;&#34;&gt;
request_maintainer
&lt;/th&gt;
&lt;th style=&#34;text-align:left;&#34;&gt;
dependencies
&lt;/th&gt;
&lt;th style=&#34;text-align:left;&#34;&gt;
other
&lt;/th&gt;
&lt;th style=&#34;text-align:right;&#34;&gt;
events
&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
yes
&lt;/td&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
no
&lt;/td&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
no
&lt;/td&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
no
&lt;/td&gt;
&lt;td style=&#34;text-align:right;&#34;&gt;
4366
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
no
&lt;/td&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
no
&lt;/td&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
no
&lt;/td&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
no
&lt;/td&gt;
&lt;td style=&#34;text-align:right;&#34;&gt;
1530
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
no
&lt;/td&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
no
&lt;/td&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
yes
&lt;/td&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
no
&lt;/td&gt;
&lt;td style=&#34;text-align:right;&#34;&gt;
767
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
no
&lt;/td&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
no
&lt;/td&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
no
&lt;/td&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
yes
&lt;/td&gt;
&lt;td style=&#34;text-align:right;&#34;&gt;
374
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
yes
&lt;/td&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
no
&lt;/td&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
no
&lt;/td&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
yes
&lt;/td&gt;
&lt;td style=&#34;text-align:right;&#34;&gt;
15
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
yes
&lt;/td&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
no
&lt;/td&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
yes
&lt;/td&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
no
&lt;/td&gt;
&lt;td style=&#34;text-align:right;&#34;&gt;
13
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
no
&lt;/td&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
no
&lt;/td&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
yes
&lt;/td&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
yes
&lt;/td&gt;
&lt;td style=&#34;text-align:right;&#34;&gt;
2
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
yes
&lt;/td&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
no
&lt;/td&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
yes
&lt;/td&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
yes
&lt;/td&gt;
&lt;td style=&#34;text-align:right;&#34;&gt;
2
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
yes
&lt;/td&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
yes
&lt;/td&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
no
&lt;/td&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
no
&lt;/td&gt;
&lt;td style=&#34;text-align:right;&#34;&gt;
2
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
yes
&lt;/td&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
yes
&lt;/td&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
no
&lt;/td&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
yes
&lt;/td&gt;
&lt;td style=&#34;text-align:right;&#34;&gt;
1
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Surprisingly, the second most frequent group of archiving actions is due to many different reasons.
This is probably the &lt;a href=&#34;https://en.wikipedia.org/wiki/Pareto_principle&#34;&gt;Pareto principle&lt;/a&gt; in action: they are around 15% of the archiving events, but the causes behind them are very diverse.&lt;/p&gt;
&lt;p&gt;However, if we look at the packages which were archived (not at the request of maintainers), most of them are archived just once:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style=&#34;text-align:right;&#34;&gt;
Events
&lt;/th&gt;
&lt;th style=&#34;text-align:right;&#34;&gt;
packages
&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style=&#34;text-align:right;&#34;&gt;
1
&lt;/td&gt;
&lt;td style=&#34;text-align:right;&#34;&gt;
5304
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&#34;text-align:right;&#34;&gt;
2
&lt;/td&gt;
&lt;td style=&#34;text-align:right;&#34;&gt;
594
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&#34;text-align:right;&#34;&gt;
3
&lt;/td&gt;
&lt;td style=&#34;text-align:right;&#34;&gt;
115
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&#34;text-align:right;&#34;&gt;
4
&lt;/td&gt;
&lt;td style=&#34;text-align:right;&#34;&gt;
31
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&#34;text-align:right;&#34;&gt;
5
&lt;/td&gt;
&lt;td style=&#34;text-align:right;&#34;&gt;
8
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;This suggests that once a package is archived, maintainers do not make the effort to put it back on CRAN, except in a few cases where there are multiple attempts.
To check, we can look at the currently available packages and see how many of the archived ones are still present on CRAN:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style=&#34;text-align:left;&#34;&gt;
CRAN
&lt;/th&gt;
&lt;th style=&#34;text-align:right;&#34;&gt;
Packages
&lt;/th&gt;
&lt;th style=&#34;text-align:left;&#34;&gt;
Proportion
&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
no
&lt;/td&gt;
&lt;td style=&#34;text-align:right;&#34;&gt;
3869
&lt;/td&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
64%
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
yes
&lt;/td&gt;
&lt;td style=&#34;text-align:right;&#34;&gt;
2183
&lt;/td&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
36%
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Many packages are currently on CRAN despite their past archival, but close to 64% are not on CRAN anymore.&lt;/p&gt;
&lt;p&gt;Almost all packages currently on CRAN now have no &lt;code&gt;X-CRAN-Comment&lt;/code&gt;, except for a few:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style=&#34;text-align:left;&#34;&gt;
Package
&lt;/th&gt;
&lt;th style=&#34;text-align:left;&#34;&gt;
X-CRAN-Comment
&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
geiger
&lt;/td&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
&lt;p&gt;Orphaned and corrected on 2022-05-09.&lt;/p&gt;
Repeated notifications about USE_FC_LEN_T were ignored.
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
alphahull
&lt;/td&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
Versions up to 2.3 have been removed for misrepresentation of authorship.
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
udunits2
&lt;/td&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
Orphaned on 2022-01-06 as installation problems were not corrected.
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
bibtex
&lt;/td&gt;
&lt;td style=&#34;text-align:left;&#34;&gt;
Orphaned and corrected on 2020-09-19 as check problems were not corrected in time.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;The CRAN team might have missed these few packages and didn’t move the comments to X-CRAN-History.&lt;/p&gt;
&lt;p&gt;There are also some non-archived packages without a CRAN history, but they usually have other fields changed.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div id=&#34;discussion&#34; class=&#34;section level1&#34;&gt;
&lt;h1&gt;Discussion&lt;/h1&gt;
&lt;p&gt;Most packages archived on CRAN are archived because the maintainers did not correct errors found by the CRAN checks.
It is clear that CRAN’s checks help packages reach a high quality, but they have a high cost for the maintainers and especially for the CRAN team.
Maintainers don’t seem to have enough time to fix the issues in time.
And the CRAN team sends personalized reminders to maintainers and sometimes patches for the packages.&lt;/p&gt;
&lt;p&gt;Although having packages corrected and free of issues is the common goal, there are a few options in light of this:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Be more restrictive&lt;/p&gt;
&lt;p&gt;Prevent a package from being accepted if it breaks dependencies, or archive packages when they fail checks.
This would make it harder to keep packages on CRAN, but would lift some pressure from the CRAN team.
It would also go against the trend in other languages’ repositories, which often don’t check packages/modules and have even fewer restrictions on dependencies (so it might be an unpopular decision).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Be more permissive:&lt;/p&gt;
&lt;p&gt;One option would be to allow maintainers more time to fix issues. I haven’t found any report of how long it takes from an error appearing to a fix on CRAN, but often it is quite long.
I have seen packages with a warning for months if not years that weren’t archived from CRAN.&lt;/p&gt;
&lt;p&gt;Maybe users could get a warning when installing a package if it or one of its dependencies does not pass all CRAN checks cleanly (without errors or warnings).
This might make users more conscious of their dependencies, but it might also add pressure on maintainers who already don’t have enough time to fix their packages’ problems.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Provide more help or tools to maintainers&lt;/p&gt;
&lt;p&gt;Another option is to provide a mechanism for maintainers to receive help fixing the package.
Currently CRAN requires that new package versions that break dependencies give enough notice in advance for other maintainers to fix their packages.
On the &lt;a href=&#34;https://stat.ethz.ch/mailman/listinfo/r-package-devel&#34;&gt;R-pkg-devel mailing list&lt;/a&gt; there are often requests for help with submissions and with errors detected by the CRAN checks, which often result in other maintainers sharing their solutions to the same problem.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The high percentage of packages that, once archived, do not come back to CRAN might be a good place to start helping maintainers, and an opportunity for users to step in and help the maintainers of packages they have been using.
Is there a need for something else? How would that work?&lt;/p&gt;
&lt;p&gt;At the same time, it is admirable that after so many years there are so few errors in the data.
However, the archival process might be a good candidate for automation: providing the reason on the webpage, adding it to X-CRAN-Comment, and moving the comments to X-CRAN-History once a package is unarchived.
Knowing more about how the CRAN team performs these actions, and how the community could help with the process, would benefit everyone.&lt;/p&gt;
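&lt;p&gt;As a sketch of what such automation could build on, the archival comments are already machine-readable: CRAN publishes the X-CRAN-Comment field in its &lt;code&gt;PACKAGES.in&lt;/code&gt; file, which base R can parse (the URL is an assumption based on the current CRAN repository layout):&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;# Read the Package and X-CRAN-Comment fields from CRAN&amp;#39;s PACKAGES.in.
# The URL is an assumption based on the current CRAN repository layout.
pkgs &amp;lt;- read.dcf(url(&amp;quot;https://cran.r-project.org/src/contrib/PACKAGES.in&amp;quot;),
                 fields = c(&amp;quot;Package&amp;quot;, &amp;quot;X-CRAN-Comment&amp;quot;))
# Keep only the packages that have a comment (often the archival reason):
pkgs[!is.na(pkgs[, &amp;quot;X-CRAN-Comment&amp;quot;]), , drop = FALSE]&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Automating the webpage and X-CRAN-History steps on top of this file would mostly be a matter of tracking when a comment appears and disappears for each package.&lt;/p&gt;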
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: This blog post was updated on 2022/01/02 to improve the parsing of actions and dates on packages. This changed the first plot to include unarchived packages, which slightly modified the second plot of reasons why packages are archived. Overall this only affected the numbers in the plots, not the conclusions or discussion.&lt;/p&gt;
&lt;div id=&#34;reproducibility&#34; class=&#34;section level3&#34;&gt;
&lt;h3&gt;Reproducibility&lt;/h3&gt;
&lt;details&gt;
&lt;pre&gt;&lt;code&gt;## ─ Session info ───────────────────────────────────────────────────────────────────────────────────────────────────────
##  setting  value
##  version  R version 4.2.0 (2022-04-22)
##  os       Ubuntu 20.04.4 LTS
##  system   x86_64, linux-gnu
##  ui       X11
##  language (EN)
##  collate  en_US.UTF-8
##  ctype    en_US.UTF-8
##  tz       Europe/Madrid
##  date     2022-05-09
##  pandoc   2.17.1.1 @ /usr/lib/rstudio/bin/quarto/bin/ (via rmarkdown)
## 
## ─ Packages ───────────────────────────────────────────────────────────────────────────────────────────────────────────
##  package      * version date (UTC) lib source
##  assertthat     0.2.1   2019-03-21 [1] CRAN (R 4.2.0)
##  blogdown       1.9     2022-03-28 [1] CRAN (R 4.2.0)
##  bookdown       0.26    2022-04-15 [1] CRAN (R 4.2.0)
##  bslib          0.3.1   2021-10-06 [1] CRAN (R 4.2.0)
##  cli            3.3.0   2022-04-25 [1] CRAN (R 4.2.0)
##  colorspace     2.0-3   2022-02-21 [1] CRAN (R 4.2.0)
##  ComplexUpset * 1.3.3   2021-12-11 [1] CRAN (R 4.2.0)
##  crayon         1.5.1   2022-03-26 [1] CRAN (R 4.2.0)
##  DBI            1.1.2   2021-12-20 [1] CRAN (R 4.2.0)
##  digest         0.6.29  2021-12-01 [1] CRAN (R 4.2.0)
##  dplyr        * 1.0.9   2022-04-28 [1] CRAN (R 4.2.0)
##  ellipsis       0.3.2   2021-04-29 [1] CRAN (R 4.2.0)
##  evaluate       0.15    2022-02-18 [1] CRAN (R 4.2.0)
##  fansi          1.0.3   2022-03-24 [1] CRAN (R 4.2.0)
##  farver         2.1.0   2021-02-28 [1] CRAN (R 4.2.0)
##  fastmap        1.1.0   2021-01-25 [1] CRAN (R 4.2.0)
##  generics       0.1.2   2022-01-31 [1] CRAN (R 4.2.0)
##  ggplot2      * 3.3.6   2022-05-03 [1] CRAN (R 4.2.0)
##  glue           1.6.2   2022-02-24 [1] CRAN (R 4.2.0)
##  gtable         0.3.0   2019-03-25 [1] CRAN (R 4.2.0)
##  highr          0.9     2021-04-16 [1] CRAN (R 4.2.0)
##  htmltools      0.5.2   2021-08-25 [1] CRAN (R 4.2.0)
##  jquerylib      0.1.4   2021-04-26 [1] CRAN (R 4.2.0)
##  jsonlite       1.8.0   2022-02-22 [1] CRAN (R 4.2.0)
##  knitr          1.39    2022-04-26 [1] CRAN (R 4.2.0)
##  labeling       0.4.2   2020-10-20 [1] CRAN (R 4.2.0)
##  lifecycle      1.0.1   2021-09-24 [1] CRAN (R 4.2.0)
##  magrittr       2.0.3   2022-03-30 [1] CRAN (R 4.2.0)
##  munsell        0.5.0   2018-06-12 [1] CRAN (R 4.2.0)
##  patchwork      1.1.1   2020-12-17 [1] CRAN (R 4.2.0)
##  pillar         1.7.0   2022-02-01 [1] CRAN (R 4.2.0)
##  pkgconfig      2.0.3   2019-09-22 [1] CRAN (R 4.2.0)
##  purrr          0.3.4   2020-04-17 [1] CRAN (R 4.2.0)
##  R6             2.5.1   2021-08-19 [1] CRAN (R 4.2.0)
##  rlang          1.0.2   2022-03-04 [1] CRAN (R 4.2.0)
##  rmarkdown      2.14    2022-04-25 [1] CRAN (R 4.2.0)
##  rstudioapi     0.13    2020-11-12 [1] CRAN (R 4.2.0)
##  sass           0.4.1   2022-03-23 [1] CRAN (R 4.2.0)
##  scales         1.2.0   2022-04-13 [1] CRAN (R 4.2.0)
##  sessioninfo    1.2.2   2021-12-06 [1] CRAN (R 4.2.0)
##  stringi        1.7.6   2021-11-29 [1] CRAN (R 4.2.0)
##  stringr        1.4.0   2019-02-10 [1] CRAN (R 4.2.0)
##  tibble         3.1.7   2022-05-03 [1] CRAN (R 4.2.0)
##  tidyselect     1.1.2   2022-02-21 [1] CRAN (R 4.2.0)
##  utf8           1.2.2   2021-07-24 [1] CRAN (R 4.2.0)
##  vctrs          0.4.1   2022-04-13 [1] CRAN (R 4.2.0)
##  withr          2.5.0   2022-03-03 [1] CRAN (R 4.2.0)
##  xfun           0.30    2022-03-02 [1] CRAN (R 4.2.0)
##  yaml           2.3.5   2022-02-21 [1] CRAN (R 4.2.0)
## 
##  [1] /home/lluis/bin/R/4.2.0/lib/R/library
## 
## ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────&lt;/code&gt;&lt;/pre&gt;
&lt;/details&gt;
&lt;/div&gt;
&lt;/div&gt;
</description>
    </item>
    
  </channel>
</rss>
