A workshop to seek interdisciplinary expert perspectives on ethically and visually representing the historical place of misrepresented peoples and locales.
Guiding question
Which inputs and outputs are critical and optimal? What are the requirements for proper software maintenance and long-term archiving?
Considerations
Open and/or widely-used data formats [TXT, CSV, XML, PDF, PNG], consuming the dataset, research outcome formats, interoperability, maintenance costs, data longevity
Goal
Necessary and preferred data formats, import and export paths, and a software maintenance and management plan
Discussants
Teresa Schultz (lead) & Doris Kosminsky
The discussion centered on currently feasible, standard practices for maintenance and archiving. Every major data repository today relies on Amazon Web Services (AWS) for storage, which raises concerns about corporate longevity; none currently implements decentralized storage such as SWARM. To remain accessible after archiving, data must be FAIR: findable, accessible, interoperable, and reusable. Options for storing and sharing data include UNR’s ScholarWorks (expensive and highly prescriptive), Zenodo (which assigns a DOI to any contribution but stores only zip archives), Dataverse, and ICPSR. For preserving the visualization and its accompanying code, BinderHub is the best fit, preserving a platform’s environment more faithfully than Docker alone.
We resolved to obtain a DOI for the project by depositing it in Zenodo.
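As a sketch of what that deposit might look like, the snippet below builds the requests for Zenodo's public REST API (create a deposition, attach metadata, then publish to mint the DOI). The token, title, and file names are placeholders, not project decisions; the endpoint paths follow Zenodo's published API, and the network calls themselves are shown only as comments.

```python
# Hedged sketch of a Zenodo deposit. ZENODO_TOKEN, the title, and the
# archive name are placeholders; endpoint paths follow Zenodo's REST API.
import json

ZENODO_API = "https://zenodo.org/api"

def deposition_request(token):
    """URL and query params for creating a new, empty deposition (POST)."""
    return ZENODO_API + "/deposit/depositions", {"access_token": token}

def metadata_payload(title, creators):
    """JSON body for the deposit; Zenodo requires at least upload_type,
    title, creators, and description."""
    return json.dumps({
        "metadata": {
            "upload_type": "dataset",
            "title": title,
            "creators": [{"name": n} for n in creators],
            "description": "Project data and visualization code.",
        }
    })

url, params = deposition_request("PLACEHOLDER_TOKEN")
body = metadata_payload(
    "Workshop dataset",  # placeholder title
    ["Schultz, Teresa", "Kosminsky, Doris"],
)
# With the `requests` library installed, the actual calls would be roughly:
#   r = requests.post(url, params=params, json={})            # new deposition
#   bucket = r.json()["links"]["bucket"]
#   requests.put(bucket + "/data.zip",                        # upload archive
#                data=open("data.zip", "rb"), params=params)
#   requests.put(url + "/" + str(r.json()["id"]),             # attach metadata
#                params=params, data=body,
#                headers={"Content-Type": "application/json"})
#   requests.post(url + "/" + str(r.json()["id"]) +           # publish -> DOI
#                 "/actions/publish", params=params)
```

Publishing is the step that mints the DOI, so it should come only after the zip archive and metadata have been verified; everything before it remains an editable draft.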