Update "master" references. (#782)
jonthegeek authored Nov 29, 2024
1 parent afde3f8 commit 6d709f7
Showing 314 changed files with 648 additions and 648 deletions.
2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/dataset_template.md
@@ -4,7 +4,7 @@ about: Suggest a new dataset for Tidy Tuesday
labels: dataset
---

-Please consider submitting this dataset as a pull request! See https://github.com/rfordatascience/tidytuesday/blob/master/.github/pr_instructions.md to learn how.
+Please consider submitting this dataset as a pull request! See https://github.com/rfordatascience/tidytuesday/blob/main/.github/pr_instructions.md to learn how.
Please fill out as much of this information as you can!

- [ ] This dataset has not already been used in TidyTuesday.
2 changes: 1 addition & 1 deletion .github/pr_instructions.md
@@ -41,7 +41,7 @@ A "pull request" is a submission of code to a git repository. If you have never

We use a fork/branch approach to pull requests, meaning you'll create a version of the repo specifically for your changes, and then ask us to merge those changes into the main tidytuesday repository.

-1. If you are on anything other than the `master` branch of your local repository, switch back to master. In R, you can use `usethis::pr_pause()` (if your previous submission is still pending), or `usethis::pr_finish()` (if we've accepted your submission).
+1. If you are on anything other than the `main` branch of your local repository, switch back to main. In R, you can use `usethis::pr_pause()` (if your previous submission is still pending), or `usethis::pr_finish()` (if we've accepted your submission).

2. Pull the latest version of the repository to your computer. In R, use `usethis::pr_merge_main()`

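The steps in this hunk lean on the usethis pull-request helpers. As a rough illustration only — `pr_pause()`, `pr_finish()`, and `pr_merge_main()` are the functions actually named above, while the `pr_init()`/`pr_push()` steps are assumptions about the later, unshown instructions — the full cycle might look like:

```
# Hedged sketch of the fork/branch PR cycle with usethis.
library(usethis)

pr_finish()                  # or pr_pause() if a previous submission is still pending
pr_merge_main()              # pull the latest main into your local copy
pr_init("my-new-dataset")    # start a fresh branch for this submission (assumed step)
# ... edit files, commit ...
pr_push()                    # open the pull request on GitHub (assumed step)
```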
2 changes: 1 addition & 1 deletion data/2018/2018-09-18/readme.md
@@ -13,6 +13,6 @@ We have a double data-set for this week!

Nathan is a glider pilot, and when gliders go above 14,000 feet, the pilot is required to have a supplemental oxygen source. The magazine of the Soaring Society of America (SSA), Soaring, recently published an article about the lack of oxygen and/or carbon dioxide during flight, and the table caught his eye. He received permission from the author and editor to post the article and "crowdsource" different means of presenting the data, which could include alternative tabular representations or other more visual means.

-Here is the article in [full.](https://github.com/rfordatascience/tidytuesday/files/2343596/Hypoxia.Article.proof.pdf)<br/> [Table 1](https://github.com/rfordatascience/tidytuesday/blob/master/data/2018-09-18/hypoxia.csv) has the useful information.
+Here is the article in [full.](https://github.com/rfordatascience/tidytuesday/files/2343596/Hypoxia.Article.proof.pdf)<br/> [Table 1](https://github.com/rfordatascience/tidytuesday/blob/main/data/2018-09-18/hypoxia.csv) has the useful information.

The author, the editor, and I are very interested in the products of everyone's imaginations! SSA is a non-profit, the author was not paid for his work, and the table originated from Guyton & Hall: Textbook of Medical Physiology, 12th ed. Attribution is all that is requested.
2 changes: 1 addition & 1 deletion data/2018/2018-09-25/raw/readme.rmd
@@ -4,4 +4,4 @@ Table data extracted from supplementary PDF via [Tabula](https://tabula.technolo

This ended up being super messy - cleaning script found below.

-[Cleaning Script](https://github.com/rfordatascience/tidytuesday/blob/master/data/2018-09-25/raw/invasive_species.R)
+[Cleaning Script](https://github.com/rfordatascience/tidytuesday/blob/main/data/2018-09-25/raw/invasive_species.R)
2 changes: 1 addition & 1 deletion data/2018/2018-10-16/readme.md
@@ -2,7 +2,7 @@

The data behind the story [The Economic Guide To Picking A College Major](https://fivethirtyeight.com/features/the-economic-guide-to-picking-a-college-major/).

-[Raw data](https://github.com/rfordatascience/tidytuesday/blob/master/data/2018-10-16/recent-grads.csv)
+[Raw data](https://github.com/rfordatascience/tidytuesday/blob/main/data/2018-10-16/recent-grads.csv)

All data is from American Community Survey 2010-2012 Public Use Microdata Series.

2 changes: 1 addition & 1 deletion data/2018/2018-10-23/readme.md
@@ -1,6 +1,6 @@
# Week 30 - Horror Movies and Profit

-[raw data](https://github.com/rfordatascience/tidytuesday/blob/master/data/2018-10-23/movie_profit.csv)
+[raw data](https://github.com/rfordatascience/tidytuesday/blob/main/data/2018-10-23/movie_profit.csv)

## [Scary Movies Are The Best Investment In Hollywood](https://fivethirtyeight.com/features/scary-movies-are-the-best-investment-in-hollywood/) - FiveThirtyEight

4 changes: 2 additions & 2 deletions data/2018/2018-10-30/readme.md
@@ -2,8 +2,8 @@

Anonymized package and R language downloads from the RStudio CRAN mirror.

-* [`r-downloads.csv`](https://github.com/rfordatascience/tidytuesday/blob/master/data/2018-10-30/r-downloads.csv) - R language downloads from RStudio CRAN mirror on last TidyTuesday for October 23, 2018.
-* [`r_downloads_year.csv`](https://github.com/rfordatascience/tidytuesday/blob/master/data/2018-10-30/r_downloads_year.csv) - A year's worth of R language downloads from RStudio CRAN mirror between October 20, 2017 and October 20, 2018.
+* [`r-downloads.csv`](https://github.com/rfordatascience/tidytuesday/blob/main/data/2018-10-30/r-downloads.csv) - R language downloads from RStudio CRAN mirror on last TidyTuesday for October 23, 2018.
+* [`r_downloads_year.csv`](https://github.com/rfordatascience/tidytuesday/blob/main/data/2018-10-30/r_downloads_year.csv) - A year's worth of R language downloads from RStudio CRAN mirror between October 20, 2017 and October 20, 2018.

## Data Dictionary
Header | Description
2 changes: 1 addition & 1 deletion data/2018/2018-11-06/readme.md
@@ -1,6 +1,6 @@
# US Wind Turbine Data

-Wind turbine location and characteristic data across the USA can be found [here](https://github.com/rfordatascience/tidytuesday/blob/master/data/2018-11-06/us_wind.csv).
+Wind turbine location and characteristic data across the USA can be found [here](https://github.com/rfordatascience/tidytuesday/blob/main/data/2018-11-06/us_wind.csv).

Some potential questions:
- How do newer installations compare to older turbines?
6 changes: 3 additions & 3 deletions data/2018/2018-11-13/readme.md
@@ -12,7 +12,7 @@ A lot of it is related to mapping, feel free to dive in and participate in their
Alternatively, if you'd rather work with some simple aggregated malaria data from [Our World in Data](https://ourworldindata.org/malaria), you can see many different summary-level datasets related to malaria incidence by region, age, or time.

__3 Datasets:__
-* [`malaria_inc.csv`](https://github.com/rfordatascience/tidytuesday/blob/master/data/2018-11-13/malaria_inc.csv) - Malaria incidence by country for all ages across the world across time
-* [`malaria_deaths.csv`](https://github.com/rfordatascience/tidytuesday/blob/master/data/2018-11-13/malaria_deaths.csv) - Malaria deaths by country for all ages across the world and time.
-* [`malaria_deaths_age.csv`](https://github.com/rfordatascience/tidytuesday/blob/master/data/2018-11-13/malaria_deaths_age.csv) - Malaria deaths by age across the world and time.
+* [`malaria_inc.csv`](https://github.com/rfordatascience/tidytuesday/blob/main/data/2018-11-13/malaria_inc.csv) - Malaria incidence by country for all ages across the world across time
+* [`malaria_deaths.csv`](https://github.com/rfordatascience/tidytuesday/blob/main/data/2018-11-13/malaria_deaths.csv) - Malaria deaths by country for all ages across the world and time.
+* [`malaria_deaths_age.csv`](https://github.com/rfordatascience/tidytuesday/blob/main/data/2018-11-13/malaria_deaths_age.csv) - Malaria deaths by age across the world and time.

2 changes: 1 addition & 1 deletion data/2018/2018-11-20/readme.md
@@ -1,4 +1,4 @@
-# FiveThirtyEight Thanksgiving Dinner or [Transgender Day of Rembrance (TDoR)](https://github.com/rfordatascience/tidytuesday/blob/master/data/2018-11-20/TDoR_readme.md)
+# FiveThirtyEight Thanksgiving Dinner or [Transgender Day of Rembrance (TDoR)](https://github.com/rfordatascience/tidytuesday/blob/main/data/2018-11-20/TDoR_readme.md)

Data originally from [fivethirtyeight](https://github.com/fivethirtyeight/data/tree/master/thanksgiving-2015).

2 changes: 1 addition & 1 deletion data/2018/2018-11-27/readme.md
@@ -1,4 +1,4 @@
-# [Maryland Bridges](https://github.com/rfordatascience/tidytuesday/blob/master/data/2018-11-27/baltimore_bridges.csv)
+# [Maryland Bridges](https://github.com/rfordatascience/tidytuesday/blob/main/data/2018-11-27/baltimore_bridges.csv)

Many thanks to [Sara Stoudt](https://twitter.com/sastoudt) for submitting this dataset as to our GitHub!

2 changes: 1 addition & 1 deletion data/2018/2018-12-04/readme.md
@@ -1,6 +1,6 @@
# Medium Data Science articles

-This week's [dataset](https://github.com/rfordatascience/tidytuesday/blob/master/data/2018-12-04/medium_datasci.csv) was submitted by [Matthew Hendrickson](https://twitter.com/mjhendrickson), thanks! Also credit to [Kanishka Misra](https://twitter.com/iamasharkskin) who wanted to work with some text-based data via the [tidytext package](https://github.com/juliasilge/tidytext).
+This week's [dataset](https://github.com/rfordatascience/tidytuesday/blob/main/data/2018-12-04/medium_datasci.csv) was submitted by [Matthew Hendrickson](https://twitter.com/mjhendrickson), thanks! Also credit to [Kanishka Misra](https://twitter.com/iamasharkskin) who wanted to work with some text-based data via the [tidytext package](https://github.com/juliasilge/tidytext).

## Data

2 changes: 1 addition & 1 deletion data/2018/2018-12-11/readme.md
@@ -2,7 +2,7 @@

This week's data is from the New York [Open Data Portal](https://data.cityofnewyork.us/Health/DOHMH-New-York-City-Restaurant-Inspection-Results/43nn-pn8j/data).

-As the dataset is >100 MB (GitHub only allows 100 MB), I uploaded a [data selection](https://github.com/rfordatascience/tidytuesday/blob/master/data/2018-12-11/nyc_restaurants.csv) of 300,000 records sampled at random from the original dataset with `sample_n()`. You could read in the original dataset by using `read_csv` on the link as seen below.
+As the dataset is >100 MB (GitHub only allows 100 MB), I uploaded a [data selection](https://github.com/rfordatascience/tidytuesday/blob/main/data/2018-12-11/nyc_restaurants.csv) of 300,000 records sampled at random from the original dataset with `sample_n()`. You could read in the original dataset by using `read_csv` on the link as seen below.

```
library(tidyverse)
# ... (remaining lines of this code block are unchanged and not shown)
```
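The code block in this hunk is cut off above. As a hedged sketch of what reading the full dataset directly might look like — the Socrata CSV export URL below is inferred from the dataset ID in the Open Data Portal link and is an assumption, not necessarily what the readme shows:

```
library(tidyverse)

# Assumed Socrata export endpoint for dataset 43nn-pn8j (not shown in the truncated hunk);
# the full file is >100 MB per the readme, so this can take a while.
nyc_restaurants_full <- read_csv(
  "https://data.cityofnewyork.us/api/views/43nn-pn8j/rows.csv?accessType=DOWNLOAD"
)
```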
6 changes: 3 additions & 3 deletions data/2019/2019-01-01/readme.md
@@ -4,8 +4,8 @@ Data for this week comes from [Mike Kearney](https://twitter.com/kearneymw) - au

## Datasets

-[`#rstats`](https://github.com/rfordatascience/tidytuesday/tree/master/data/2019/2019-01-01/rstats_tweets.rds)
-[`#TidyTuesday`](https://github.com/rfordatascience/tidytuesday/tree/master/data/2019/2019-01-01/tidytuesday_tweets.rds)
+[`#rstats`](https://github.com/rfordatascience/tidytuesday/tree/main/data/2019/2019-01-01/rstats_tweets.rds)
+[`#TidyTuesday`](https://github.com/rfordatascience/tidytuesday/tree/main/data/2019/2019-01-01/tidytuesday_tweets.rds)

Just a heads up, there are A LOT of columns (88!) in this collection - feel free to select whichever are useful for your analysis or interest! Both datasets have the same column types which can be seen below.

@@ -98,4 +98,4 @@ Just a heads up, there are A LOT of columns (88!) in this collection - feel free
|account_lang |character |
|profile_banner_url |character |
|profile_background_url |character |
-|profile_image_url |character |
+|profile_image_url |character |
6 changes: 3 additions & 3 deletions data/2019/2019-02-05/readme.md
@@ -21,9 +21,9 @@ Lastly, we have included some recession dates in the US - the Great Recession (2
or read the data directly into R!

```
-state_hpi <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-02-05/state_hpi.csv")
-mortgage_rates <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-02-05/mortgage.csv")
-recession_dates <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-02-05/recessions.csv")
+state_hpi <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/main/data/2019/2019-02-05/state_hpi.csv")
+mortgage_rates <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/main/data/2019/2019-02-05/mortgage.csv")
+recession_dates <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/main/data/2019/2019-02-05/recessions.csv")
```

# Data Dictionary
8 changes: 4 additions & 4 deletions data/2019/2019-02-12/readme.md
@@ -13,9 +13,9 @@ Data comes directly from the American Association for the Advancement of Science
or read the data directly into R!

```
-fed_rd <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-02-12/fed_r_d_spending.csv")
-energy_spend <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-02-12/energy_spending.csv")
-climate_spend <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-02-12/climate_spending.csv")
+fed_rd <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/main/data/2019/2019-02-12/fed_r_d_spending.csv")
+energy_spend <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/main/data/2019/2019-02-12/energy_spending.csv")
+climate_spend <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/main/data/2019/2019-02-12/climate_spending.csv")
```

# Data Dictionary
@@ -145,4 +145,4 @@ gcc_clean <- gcc_raw %>%
gcc_spending = as.numeric(gcc_spending),
gcc_spending = gcc_spending * 1*10^6)
-```
+```
2 changes: 1 addition & 1 deletion data/2019/2019-02-19/readme.md
@@ -11,7 +11,7 @@ Alternatively - I have cleaned the data for you and saved as a `.csv` for you to
To get at the details for broad or major fields, `dplyr::summarize` is your friend!

```{r}
-phd_field <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-02-19/phd_by_field.csv")
+phd_field <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/main/data/2019/2019-02-19/phd_by_field.csv")
```

### Data Dictionary
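The hunk above points readers at `dplyr::summarize` for rolling field-level counts up to broad or major fields. A minimal sketch, assuming the file has columns along the lines of `broad_field`, `year`, and `n_phds` (the column names are assumptions; they are not shown in this hunk):

```
library(tidyverse)

phd_field <- read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/main/data/2019/2019-02-19/phd_by_field.csv")

# Total PhDs awarded per broad field and year (column names assumed)
phd_by_broad <- phd_field %>%
  group_by(broad_field, year) %>%
  summarize(n_phds = sum(n_phds, na.rm = TRUE), .groups = "drop")
```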
4 changes: 2 additions & 2 deletions data/2019/2019-02-26/readme.md
@@ -14,8 +14,8 @@ Lastly, if for some reason you'd like to see the raw untranslated dataset it is
### Grab the raw data here

```{r}
-full_trains <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-02-26/full_trains.csv")
-small_trains <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-02-26/small_trains.csv")
+full_trains <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/main/data/2019/2019-02-26/full_trains.csv")
+small_trains <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/main/data/2019/2019-02-26/small_trains.csv")
```

</br>
6 changes: 3 additions & 3 deletions data/2019/2019-03-05/readme.md
@@ -17,9 +17,9 @@ Data Scientist and Austin #Rladies co-organizer [Caitlin Hudon](https://twitter.
### Grab the clean data here

```{r}
-jobs_gender <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-03-05/jobs_gender.csv")
-earnings_female <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-03-05/earnings_female.csv")
-employed_gender <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-03-05/employed_gender.csv")
+jobs_gender <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/main/data/2019/2019-03-05/jobs_gender.csv")
+earnings_female <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/main/data/2019/2019-03-05/earnings_female.csv")
+employed_gender <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/main/data/2019/2019-03-05/employed_gender.csv")
```

The original data came primarily from .xlsx sheets - I do **NOT** recommend cleaning them as an exercise - there are some major gotchas that are less than enjoyable. However I have uploaded the raw data and how I cleaned it if you are interested in taking a look. As a summary table the major and minor categories are indicated by indent but this doesn't translate nicely to either conversion to .csv or being read in directly as a .xlsx file.
2 changes: 1 addition & 1 deletion data/2019/2019-03-12/readme.md
@@ -5,7 +5,7 @@ This week's data comes from the [Board Game Geek](http://boardgamegeek.com/) dat
To follow along with a [fivethirtyeight article](https://fivethirtyeight.com/features/designing-the-best-board-game-on-the-planet/), I limited to only games with at least 50 ratings and for games between 1950 and 2016. This still leaves us with 10,532 games!

```{r}
-board_games <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-03-12/board_games.csv")
+board_games <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/main/data/2019/2019-03-12/board_games.csv")
```

### Data Dictionary
2 changes: 1 addition & 1 deletion data/2019/2019-03-26/readme.md
@@ -10,7 +10,7 @@ A few articles examined the most popular pet names in 2018, one from [Seattle](h
## Get the data!

```
-seattle_pets <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-03-26/seattle_pets.csv")
+seattle_pets <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/main/data/2019/2019-03-26/seattle_pets.csv")
```

### Data Dictionary
2 changes: 1 addition & 1 deletion data/2019/2019-04-02/readme.md
@@ -8,7 +8,7 @@ The Seattle Times recently covered [What we can learn from Seattle's bike-counte
# Get the Data!

```
-bike_traffic <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-04-02/bike_traffic.csv")
+bike_traffic <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/main/data/2019/2019-04-02/bike_traffic.csv")
```

### Data Dictionary
6 changes: 3 additions & 3 deletions data/2019/2019-04-09/readme.md
@@ -30,11 +30,11 @@ Additionally a [gist](https://gist.github.com/johnburnmurdoch/bd20db77b258203160
# Get the Data!

```
-player_dob <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-04-09/player_dob.csv")
+player_dob <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/main/data/2019/2019-04-09/player_dob.csv")
-grand_slams <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-04-09/grand_slams.csv")
+grand_slams <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/main/data/2019/2019-04-09/grand_slams.csv")
-grand_slam_timeline <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-04-09/grand_slam_timeline.csv")
+grand_slam_timeline <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/main/data/2019/2019-04-09/grand_slam_timeline.csv")
```

# Data Dictionaries
14 changes: 7 additions & 7 deletions data/2019/2019-04-16/readme.md
@@ -9,19 +9,19 @@ She was nice enough to include the raw data as .csv files, where I have included
## Get the data!

```
-brexit <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-04-16/brexit.csv")
+brexit <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/main/data/2019/2019-04-16/brexit.csv")
-corbyn <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-04-16/corbyn.csv")
+corbyn <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/main/data/2019/2019-04-16/corbyn.csv")
-dogs <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-04-16/dogs.csv")
+dogs <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/main/data/2019/2019-04-16/dogs.csv")
-eu_balance <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-04-16/eu_balance.csv")
+eu_balance <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/main/data/2019/2019-04-16/eu_balance.csv")
-pensions <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-04-16/pensions.csv")
+pensions <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/main/data/2019/2019-04-16/pensions.csv")
-trade <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-04-16/trade.csv")
+trade <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/main/data/2019/2019-04-16/trade.csv")
-women_research <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-04-16/women_research.csv")
+women_research <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/main/data/2019/2019-04-16/women_research.csv")
```
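Since the seven files in this hunk share a common directory, an equivalent way to pull them all in one pass (a hedged sketch; the file names and base URL are taken from the links above) might look like:

```
library(tidyverse)

base_url <- "https://raw.githubusercontent.com/rfordatascience/tidytuesday/main/data/2019/2019-04-16/"
files <- c("brexit", "corbyn", "dogs", "eu_balance", "pensions", "trade", "women_research")

# Named list of tibbles, one per csv file
economist_data <- map(set_names(files), function(f) read_csv(paste0(base_url, f, ".csv")))
```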


(Diffs for the remaining changed files are not shown here.)
