
Commit

typos
Franck Michel committed May 22, 2020
1 parent 6894246 commit 9087015
Showing 2 changed files with 2 additions and 2 deletions.
2 changes: 1 addition & 1 deletion src/README.md
@@ -15,7 +15,7 @@ This folder provides various tools, scripts and mappings files involved in carry
In both cases, one result JSON file is produced per article processed from the CORD-19 corpus.
Directory [mongo](mongo) provides the scripts used to import these different sets of JSON files into MongoDB, and pre-process them with MongoDB aggregation queries to clean up the data and prepare the JSON format for the next stage.

### CORD-19 loading and prep-processing
### CORD-19 loading and pre-processing

In addition to the extraction of named entities and arguments from the CORD-19 corpus, we also need the source files to generate the articles' metadata in RDF.

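The hunk above mentions importing the per-article JSON result files into MongoDB and cleaning them up with aggregation queries. The project's actual scripts live in the [mongo](mongo) directory; purely as an illustration of that step, an import and clean-up pass might look like the following shell sketch. The database, collection, field, and path names here are assumptions, not the project's own.

```bash
# Illustrative sketch only: the real import/aggregation scripts are in the mongo/ directory.
# Assumed database (cord19), collection (entities), and input path.

# Import every per-article JSON result file into one collection.
for f in output/entities/*.json; do
    mongoimport --db cord19 --collection entities --file "$f"
done

# Clean up and reshape the documents with an aggregation pipeline,
# writing the result to a new collection for the next processing stage.
mongo cord19 --quiet --eval '
  db.entities.aggregate([
    { $match: { "entities.0": { $exists: true } } },        // keep articles with at least one entity
    { $project: { paper_id: 1, title: 1, entities: 1 } },   // keep only the fields needed downstream
    { $out: "entities_prepared" }
  ])'
```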
2 changes: 1 addition & 1 deletion src/acta/README.md
@@ -18,7 +18,7 @@ Install the requirements in a Python 3.7 environment:
$ pip install -r requirements.txt
```

Download and extract the models into the model directory. You can download the pre-trained models[here](https://covid19.i3s.unice.fr/~team/acta_models.zip).
Download and extract the models into the model directory. You can download the pre-trained models [here](https://covid19.i3s.unice.fr/~team/acta_models.zip).


Run the pipeline:
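For context on the download step shown in the second hunk, fetching and unpacking the pre-trained models could be done along these lines; the destination directory name `model` is an assumption based on the text above.

```bash
# Sketch: download the pre-trained models and extract them into the model directory.
# The destination directory is an assumption; adjust it to the repository layout.
wget https://covid19.i3s.unice.fr/~team/acta_models.zip
unzip acta_models.zip -d model/
```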
