Commit

Merge remote-tracking branch 'IQSS/develop' into 11262-copy-labels
qqmyers committed Feb 20, 2025
2 parents ab9d7ba + 2210d16 commit 32a9096
Showing 13 changed files with 327 additions and 37 deletions.
1 change: 1 addition & 0 deletions doc/release-notes/10541-root-alias-name2.md
@@ -0,0 +1 @@
The [tutorial](https://dataverse-guide--11201.org.readthedocs.build/en/11201/container/running/demo.html#root-collection-customization-alias-name-etc) on running Dataverse in Docker has been updated to explain how to configure the root collection using a JSON file. See also #10541 and #11201.
8 changes: 8 additions & 0 deletions doc/release-notes/11178-bug-fix-sort-by-newest-first.md
@@ -0,0 +1,8 @@
### Bug fix: Sorting by "newest first"

Fixed an issue where draft versions of datasets were sorted using the release timestamp of their most recent major version.
This caused newer drafts to appear incorrectly alongside their corresponding major version, instead of at the top, when sorted by "newest first".
Draft datasets are now sorted by their last update timestamp.
The sorting behavior of published dataset versions (major and minor) is unchanged.

**Upgrade instructions**: draft datasets must be reindexed for this fix to take effect.
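For example, one way to pick up the drafts is to kick off a full reindex via the admin API. This is only a sketch: it assumes a standard deployment listening on localhost:8080, and a full reindex (which covers the drafts) may be more than you need on a large installation.

```shell
# Trigger a full reindex so draft datasets get the corrected sort timestamp.
# SERVER_URL is an assumption; adjust it for your installation.
SERVER_URL="http://localhost:8080"
curl "$SERVER_URL/api/admin/index"
```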
@@ -0,0 +1,4 @@
### News on Support for External Vocabulary Services

It is now possible to populate the Dataverse keyword metadata using an OntoPortal service.
The code has been shared on [GDCC GitHub Repository](https://github.com/gdcc/dataverse-external-vocab-support#scripts-in-production).
@@ -0,0 +1 @@
When a dataset has a long running lock, including when it is 'in review', Dataverse will now slow the page refresh rate over time.
27 changes: 27 additions & 0 deletions doc/sphinx-guides/source/container/running/demo.rst
@@ -29,6 +29,8 @@ To stop the containers hit ``Ctrl-c`` (hold down the ``Ctrl`` key and then hit t

To start the containers, run ``docker compose up``.

.. _starting-over:

Deleting Data and Starting Over
-------------------------------

@@ -46,6 +48,8 @@ Starting Fresh

For this exercise, please start fresh by stopping all containers and removing the ``data`` directory.

.. _demo-persona:

Creating and Running a Demo Persona
+++++++++++++++++++++++++++++++++++

@@ -137,6 +141,29 @@ In the example below of configuring :ref:`:FooterCopyright` we use the default u

Once you make this change, it should be visible in the copyright notice in the bottom left of every page.

Root Collection Customization (Alias, Name, etc.)
+++++++++++++++++++++++++++++++++++++++++++++++++

Before running ``docker compose up`` for the first time, you can customize the root collection by placing a JSON file in the right place.

First, in the "demo" directory you created (see :ref:`demo-persona`), create a subdirectory called "config":

``mkdir demo/config``

Next, download :download:`dataverse-complete.json <../../_static/api/dataverse-complete.json>` and put it in the "config" directory you just created. The contents of your "demo" directory should look something like this:

.. code-block:: bash

   % find demo
   demo
   demo/config
   demo/config/dataverse-complete.json
   demo/init.sh

Edit ``dataverse-complete.json`` to have the values you want. You'll want to refer to :ref:`update-dataverse-api` in the API Guide to understand the format. In that documentation you can find optional parameters as well.
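For example, a minimal ``dataverse-complete.json`` could look something like this (all of the values below are placeholders, not defaults; consult :ref:`update-dataverse-api` for the authoritative list of fields):

.. code-block:: json

  {
    "alias": "root",
    "name": "My Demo Collection",
    "dataverseContacts": [{"contactEmail": "demo@example.org"}],
    "affiliation": "Example University",
    "description": "A demo root collection.",
    "dataverseType": "UNCATEGORIZED"
  }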

To test your JSON file, run ``docker compose up``. Again, this only works when you are running ``docker compose up`` for the first time. (You can always start over. See :ref:`starting-over`.)

Multiple Languages
++++++++++++++++++

10 changes: 10 additions & 0 deletions modules/container-configbaker/scripts/bootstrap/demo/init.sh
@@ -19,6 +19,16 @@ echo ""
echo "Setting DOI provider to \"FAKE\"..."
curl -sS -X PUT -d FAKE "${DATAVERSE_URL}/api/admin/settings/:DoiProvider"

API_TOKEN=$(grep apiToken "/tmp/setup-all.sh.out" | jq ".data.apiToken" | tr -d \")
export API_TOKEN

ROOT_COLLECTION_JSON=/scripts/bootstrap/demo/config/dataverse-complete.json
if [ -f $ROOT_COLLECTION_JSON ]; then
echo ""
echo "Updating root collection based on $ROOT_COLLECTION_JSON..."
curl -sS -X PUT -H "X-Dataverse-key:$API_TOKEN" "$DATAVERSE_URL/api/dataverses/:root" --upload-file $ROOT_COLLECTION_JSON
fi

echo ""
echo "Revoke the key that allows for creation of builtin users..."
curl -sS -X DELETE "${DATAVERSE_URL}/api/admin/settings/BuiltinUsers.KEY"
2 changes: 1 addition & 1 deletion pom.xml
@@ -494,7 +494,7 @@
<dependency>
<groupId>com.nimbusds</groupId>
<artifactId>oauth2-oidc-sdk</artifactId>
-                <version>10.13.2</version>
+                <version>11.22.1</version>
</dependency>
<!-- Caching library, current main use case is for OIDC authentication -->
<dependency>
@@ -23,6 +23,7 @@
import edu.harvard.iq.dataverse.*;
import edu.harvard.iq.dataverse.authorization.AuthenticationServiceBean;
import edu.harvard.iq.dataverse.authorization.users.AuthenticatedUser;
import edu.harvard.iq.dataverse.search.IndexServiceBean;
import edu.harvard.iq.dataverse.util.BundleUtil;

import java.sql.Timestamp;
@@ -60,6 +61,7 @@ public class IngestMessageBean implements MessageListener {
@EJB IngestServiceBean ingestService;
@EJB UserNotificationServiceBean userNotificationService;
@EJB AuthenticationServiceBean authenticationServiceBean;
@EJB IndexServiceBean indexService;


public IngestMessageBean() {
@@ -111,6 +113,7 @@ public void onMessage(Message message) {
// and "mixed success and failure" emails. Now we never list successfully
// ingested files so this line is commented out.
// sbIngestedFiles.append(String.format("<li>%s</li>", datafile.getCurrentName()));
indexService.asyncIndexDataset(datafile.getOwner(), true);
} else {
logger.warning("Error occurred during ingest job for file id " + datafile_id + "!");
sbIngestedFiles.append(String.format("<li>%s</li>", datafile.getCurrentName()));
@@ -69,8 +69,6 @@ public class XmlMetadataTemplate {
public static final String XML_SCHEMA_VERSION = "4.5";

private DoiMetadata doiMetadata;
-    //QDR - used to get ROR name from ExternalVocabularyValue via pidProvider.get
-    private PidProvider pidProvider = null;

public XmlMetadataTemplate() {
}
@@ -98,13 +96,6 @@ private void generateXML(DvObject dvObject, OutputStream outputStream) throws XM
String language = null; // machine locale? e.g. for Publisher which is global
String metadataLanguage = null; // when set, otherwise = language?

-        //QDR - used to get ROR name from ExternalVocabularyValue via pidProvider.get
-        GlobalId pid = null;
-        pid = dvObject.getGlobalId();
-        if ((pid == null) && (dvObject instanceof DataFile df)) {
-            pid = df.getOwner().getGlobalId();
-        }
-        pidProvider = PidUtil.getPidProvider(pid.getProviderId());
XMLStreamWriter xmlw = XMLOutputFactory.newInstance().createXMLStreamWriter(outputStream);
xmlw.writeStartElement("resource");
boolean deaccessioned=false;
57 changes: 33 additions & 24 deletions src/main/java/edu/harvard/iq/dataverse/search/IndexServiceBean.java
@@ -947,32 +947,36 @@ public SolrInputDocuments toSolrDocs(IndexableDataset indexableDataset, Set<Long
solrInputDocument.addField(SearchFields.CATEGORY_OF_DATAVERSE, dvIndexableCategoryName);
solrInputDocument.addField(SearchFields.IDENTIFIER_OF_DATAVERSE, dvAlias);
solrInputDocument.addField(SearchFields.DATAVERSE_NAME, dvDisplayName);

-                Date datasetSortByDate = new Date();
-                Date majorVersionReleaseDate = dataset.getMostRecentMajorVersionReleaseDate();
-                if (majorVersionReleaseDate != null) {
-                    if (true) {
-                        String msg = "major release date found: " + majorVersionReleaseDate.toString();
-                        logger.fine(msg);
-                    }
-                    datasetSortByDate = majorVersionReleaseDate;
-                } else {
-                    if (indexableDataset.getDatasetState().equals(IndexableDataset.DatasetState.WORKING_COPY)) {
-                        solrInputDocument.addField(SearchFields.PUBLICATION_STATUS, UNPUBLISHED_STRING);
-                    } else if (indexableDataset.getDatasetState().equals(IndexableDataset.DatasetState.DEACCESSIONED)) {
-                        solrInputDocument.addField(SearchFields.PUBLICATION_STATUS, DEACCESSIONED_STRING);
-                    }
-                    Date createDate = dataset.getCreateDate();
-                    if (createDate != null) {
-                        if (true) {
-                            String msg = "can't find major release date, using create date: " + createDate;
-                            logger.fine(msg);
-                        }
-                        datasetSortByDate = createDate;
-                    } else {
-                        String msg = "can't find major release date or create date, using \"now\"";
-                        logger.info(msg);
-                        datasetSortByDate = new Date();
-                    }
-                }
+                Date datasetSortByDate;
+                // For now, drafts are indexed using their last update time, and published versions are indexed using their
+                // most recent major version release date.
+                // This means that newly created or edited drafts will show up on the top when sorting by newest, newly
+                // published major versions will also show up on the top, and newly published minor versions will be shown
+                // next to their corresponding major version.
+                if (state.equals(DatasetState.WORKING_COPY)) {
+                    Date lastUpdateTime = indexableDataset.getDatasetVersion().getLastUpdateTime();
+                    if (lastUpdateTime != null) {
+                        logger.fine("using last update time of indexed dataset version: " + lastUpdateTime);
+                        datasetSortByDate = lastUpdateTime;
+                    } else {
+                        logger.fine("can't find last update time, using \"now\"");
+                        datasetSortByDate = new Date();
+                    }
+                } else {
+                    Date majorVersionReleaseDate = dataset.getMostRecentMajorVersionReleaseDate();
+                    if (majorVersionReleaseDate != null) {
+                        logger.fine("major release date found: " + majorVersionReleaseDate.toString());
+                        datasetSortByDate = majorVersionReleaseDate;
+                    } else {
+                        Date createDate = dataset.getCreateDate();
+                        if (createDate != null) {
+                            logger.fine("can't find major release date, using create date: " + createDate);
+                            datasetSortByDate = createDate;
+                        } else {
+                            logger.fine("can't find major release date or create date, using \"now\"");
+                            datasetSortByDate = new Date();
+                        }
+                    }
+                }
solrInputDocument.addField(SearchFields.RELEASE_OR_CREATE_DATE, datasetSortByDate);
@@ -985,7 +989,12 @@ public SolrInputDocuments toSolrDocs(IndexableDataset indexableDataset, Set<Long
// solrInputDocument.addField(SearchFields.RELEASE_OR_CREATE_DATE,
// dataset.getPublicationDate());
} else if (state.equals(DatasetState.WORKING_COPY)) {
if (dataset.getReleasedVersion() == null) {
solrInputDocument.addField(SearchFields.PUBLICATION_STATUS, UNPUBLISHED_STRING);
}
solrInputDocument.addField(SearchFields.PUBLICATION_STATUS, DRAFT_STRING);
} else if (state.equals(IndexableDataset.DatasetState.DEACCESSIONED)) {
solrInputDocument.addField(SearchFields.PUBLICATION_STATUS, DEACCESSIONED_STRING);
}

addDatasetReleaseDateToSolrDoc(solrInputDocument, dataset);
@@ -1588,7 +1597,7 @@ public SolrInputDocuments toSolrDocs(IndexableDataset indexableDataset, Set<Long
}
datafileSolrInputDocument.addField(SearchFields.RELEASE_OR_CREATE_DATE, fileSortByDate);

-            if (majorVersionReleaseDate == null && !datafile.isHarvested()) {
+            if (dataset.getReleasedVersion() == null && !datafile.isHarvested()) {
datafileSolrInputDocument.addField(SearchFields.PUBLICATION_STATUS, UNPUBLISHED_STRING);
}

15 changes: 12 additions & 3 deletions src/main/webapp/dataset.xhtml
@@ -858,6 +858,11 @@
$(this).ready(function () {
refreshIfStillLocked();
});

var initialInterval = 5000; // 5 seconds
var maxInterval = 300000; // 5 minutes
var currentInterval = initialInterval;
var backoffFactor = 1.2; // Exponential factor
function refreshIfStillLocked() {
if ($('input[id$="datasetLockedForAnyReasonVariable"]').val() === 'true') {
// if dataset is locked, instruct the page to
@@ -882,18 +887,22 @@
$('button[id$="refreshButton"]').trigger('click');
//refreshAllCommand();
}, 1500);
} else {
// Reset the interval if the dataset is unlocked
currentInterval = initialInterval;
}
}
}

function waitAndCheckLockAgain() {
setTimeout(function () {
// refresh the lock in the
// backing bean; i.e., check, if the ingest has
// already completed in the background:
//$('button[id$="refreshButton"]').trigger('click');
//refreshLockCommand();
refreshAllLocksCommand();
-        }, 10000);
+            // Increase the interval exponentially for the next check
+            currentInterval = Math.min((currentInterval * backoffFactor) + 2, maxInterval);
+        }, currentInterval);
}
//]]>
</script>
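As a side note, the backoff above means the lock-check interval grows from 5 seconds toward the 5-minute cap. The sketch below (an illustration, not part of the commit) mirrors the `Math.min((currentInterval * backoffFactor) + 2, maxInterval)` update in shell to show the first few intervals:

```shell
# Reproduce the interval schedule: start at 5000 ms, grow by *1.2 + 2, cap at 300000 ms.
interval=5000
for i in 1 2 3 4 5; do
  interval=$(awk -v v="$interval" 'BEGIN { n = v * 1.2 + 2; if (n > 300000) n = 300000; printf "%d", n }')
  echo "check $i after ${interval} ms"
done
```

The `+ 2` keeps the interval growing even if it were ever 0, and the cap guarantees a locked page is still polled at least every five minutes.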