LexBib bibliodata workflow overview
This page contains a summary of how bibliographical data (bibliodata) is processed in LexBib.
- This table contains information about the status of item collections.
- On the LexBib main page, a set of queries shows the contents of LexBib wikibase.
- The LexBib About page lists publications and presentations about LexBib and Elexifinder.
Zotero
Collection of bibliodata
- All bibliodata is stored in LexBib Zotero, which is a "group library" on the Zotero platform. The group library is public, but item attachments (PDF, TXT) are restricted to registered group members (project members only).
- For scraping publication metadata from web pages (e.g. article 'landing pages' in journal or publisher portals), the Zotero software includes so-called translators, which ingest bibliodata as single items or in batches. Zotero will also try to harvest the PDF. If it finds a PDF, it also produces a TXT version.
- We transform bibliodata that reaches us as tabular data to RIS format, with our own converters (see the sketch after this list). RIS is straightforwardly imported by Zotero, and, if needed, exported, manipulated using regular expressions, and re-imported.
- We can update the Zotero database using the Zotero API; a sketch follows below. For example, we can update author first and last names according to their preferred form in LexBib wikibase.
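A minimal sketch of such a tabular-to-RIS conversion, in the spirit of the converters mentioned above (the column names and the semicolon-separated author convention are illustrative assumptions, not the actual converter code):

```python
import csv

# Hypothetical column names in the incoming tabular data
RIS_FIELDS = [("TI", "title"), ("AU", "authors"), ("PY", "year"), ("LA", "language")]

def row_to_ris(row: dict) -> str:
    """Turn one CSV row into a single RIS record."""
    lines = ["TY  - JOUR"]                       # record type: journal article
    for tag, column in RIS_FIELDS:
        value = (row.get(column) or "").strip()
        if not value:
            continue
        if tag == "AU":                          # one AU line per author (assumed ';'-separated)
            lines += [f"AU  - {a.strip()}" for a in value.split(";")]
        else:
            lines.append(f"{tag}  - {value}")
    lines.append("ER  - ")                       # end of record
    return "\n".join(lines)

with open("input.csv", newline="", encoding="utf-8") as f:
    records = [row_to_ris(row) for row in csv.DictReader(f)]
print("\n".join(records))  # ready for import into Zotero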
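For API-based updates, a hedged sketch using the pyzotero client (the name mapping is a hypothetical stand-in for preferred forms pulled from LexBib wikibase; the group ID and API key are placeholders):

```python
from pyzotero import zotero

zot = zotero.Zotero("GROUP_ID", "group", "API_KEY")  # placeholders, fill in real credentials

# Hypothetical mapping from a name form found in Zotero to the preferred form
preferred = {("D.", "Lindemann"): ("David", "Lindemann")}

for item in zot.everything(zot.top()):
    changed = False
    for creator in item["data"].get("creators", []):
        key = (creator.get("firstName", ""), creator.get("lastName", ""))
        if key in preferred:
            creator["firstName"], creator["lastName"] = preferred[key]
            changed = True
    if changed:
        zot.update_item(item)  # write the corrected names back via the API
```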
Manual curation
- The editing team uses Zotero group synchronization (tutorial).
- Completeness of publication metadata is manually checked.
- Every item is annotated with the first author's location; the location of the first author is a requirement for the dataset to be exported to Elexifinder. An English Wikipedia page URL (as unambiguous identifier) is placed in the Zotero "extra" field. zotexport.py (see below) maps that to the corresponding LexBib place item (tutorial).
- The Zotero "language" field (publication language) must contain a two-letter ISO-639-1, or a three-letter ISO-639-3 language code.
- In the sources, person names (author, editor) are often disordered or incomplete. We try to validate correct name forms already at this stage. Proper disambiguation (with an unambiguous ID) is not possible in Zotero.
- Items are annotated with Zotero tags that contain shortcodes, which are interpreted by zotexport.py (see the parsing sketch after this list). The shortcodes point either to LexBib wikibase items (Q-ID) or to pre-defined values:
  - :container Qxx points to a containing item (a BibCollection item describing a journal issue or an edited volume).
  - :event Qxx points to a corresponding event (an item describing a conference iteration or a workshop). A property pointing to the event location is attached to the LexBib wikibase Event item.
  - :abstractLanguage en indicates that the abstract contained in the Zotero record is given in English (and not in the language of the article, as stated in the "language" field).
  - :collection x points to an Elexifinder collection number.
  - :type Review classifies the item as a review article.
  - :type Community classifies the item as a piece of community communication (anniversaries, obituaries, etc.).
  - :type Report classifies the item as an event report.
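The following is a minimal sketch of how such shortcode tags can be interpreted; the regex and the returned structure are illustrative assumptions rather than the actual zotexport.py logic, and the Q-ID is invented:

```python
import re

SHORTCODE = re.compile(r"^:(container|event|abstractLanguage|collection|type)\s+(\S+)$")

def parse_shortcodes(tags: list[str]) -> dict[str, list[str]]:
    """Split Zotero tag strings into shortcode instructions; ignore ordinary tags."""
    parsed: dict[str, list[str]] = {}
    for tag in tags:
        match = SHORTCODE.match(tag)
        if match:
            parsed.setdefault(match.group(1), []).append(match.group(2))
    return parsed

print(parse_shortcodes([":container Q9999", ":type Review", "some-ordinary-tag"]))
# {'container': ['Q9999'], 'type': ['Review']}
```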
Full text TXT cleaning
- We use the GROBID tool. zotexport.py (see below) leaves a copy of all PDFs in a folder, which GROBID then processes; GROBID produces a TEI-XML representation of each PDF's content. The article body (i.e. the part that usually starts after the abstract and ends before the references section) is enclosed in a tag called <body>; a minimal extraction sketch follows below.
- In cases where GROBID fails, we manually isolate the text body from the Zotero-produced TXT (tutorial). Full texts that do not follow a standard structure, most typically because they lack an abstract (as is common in book chapters), are often not properly parsed by GROBID.
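A minimal sketch of pulling the article body out of a GROBID TEI-XML file (the file name is a placeholder; the TEI namespace is the standard one GROBID emits):

```python
from xml.etree import ElementTree as ET

TEI_NS = {"tei": "http://www.tei-c.org/ns/1.0"}

def extract_body(tei_path: str) -> str:
    """Return the plain text of the <body> element of a GROBID TEI-XML file."""
    root = ET.parse(tei_path).getroot()
    body = root.find(".//tei:body", TEI_NS)
    if body is None:                   # GROBID failed to isolate a body
        return ""
    return " ".join(body.itertext())   # concatenate all text nodes

print(extract_body("article.grobid.tei.xml")[:200])  # placeholder file name
```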
LexBib Wikibase
LexBib wikibase is the central data repository, where Zotero literal values (text strings) are disambiguated to ontology entities, and where bibliographic items and LexVoc terms (as content indicators) come together. Wikibase content can be accessed (GUI, API, SPARQL) by everybody, and edited by registered users (manually or via the API).
Zotero export
- Items are exported from Zotero using our own JSON exporter.
- That export is processed using zotexport.py. The script prepares the upload of the items to Wikibase:
  - Author locations and Zotero tags are interpreted. Unknown places and container items are created, so that the bibliographical item can be linked to them.
  - PDFs are stored for GROBID processing.
  - Zotero fields are mapped to LexBib wikibase properties (see the mapping sketch after this list).
  - New items are assigned a LexBib URI, which is attached to the Zotero item as a "link attachment" and in the field "archive location". The Zotero URI of the item is mapped to LexBib wikibase property P16; the Zotero URIs of PDF and TXT are attached to that P16 statement as qualifiers.
- bibimport.py uploads the resulting semantic triples ("wikibase statements") to LexBib wikibase.
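A hedged sketch of the field-mapping step: P16 as the Zotero-URI property is taken from the description above, while the other property IDs, the triple structure, and the group URL are illustrative placeholders:

```python
FIELD_TO_PROPERTY = {
    "title": "P_TITLE",        # placeholder property ID
    "date": "P_DATE",          # placeholder property ID
    "language": "P_LANGUAGE",  # placeholder property ID
}

def zotero_to_statements(item: dict, qid: str) -> list[tuple[str, str, str]]:
    """Map one exported Zotero item to (subject, property, value) triples."""
    data = item["data"]
    triples = [(qid, prop, data[field])
               for field, prop in FIELD_TO_PROPERTY.items() if data.get(field)]
    # P16: Zotero URI of the item (PDF/TXT URIs would be added as qualifiers)
    triples.append((qid, "P16",
                    f"https://www.zotero.org/groups/GROUP_ID/items/{data['key']}"))
    return triples
```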
Author Disambiguation: Open Refine
- For Elexifinder version 2 (spring 2021), we reduced the roughly 5,000 distinct person names then present in the database to around 4,000 unique person items, using clustering algorithms in Open Refine. Persons in LexBib have up to six name variants (see query at Main Page).
- For subsequent updates, we use our own wikibase reconciliation service with Open Refine; a sketch of the underlying protocol follows below. That means that person name literals are matched against the person items existing in LexBib wikibase, where all name literals previously matched to a person item are stored. This query exports wikibase statements pointing to unmatched persons, and newcreatorsfromopenrefine.py processes the reconciliation results, creates new items for those names that have remained unmatched, and updates the statements and the literals associated with persons.
- This part of the workflow will soon be simplified, as the wikibase.cloud developers are about to build Open Refine into wikibase, i.e. a wikibase.cloud wikibase will by default ship its own Open Refine instance for reconciling literal values (i.e. matching literals to wikibase items) and for uploading reconciliation results to wikibase. This is a shortcut for the export-reconciliation-import process described above, which still involves manual configuration of the Open Refine tool and of our own reconciliation service, as well as of the upload process for reconciled data.
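Under the hood, Open Refine talks to such a service via the reconciliation API protocol; the sketch below issues a query of that kind directly (the endpoint URL is a placeholder, and using the LexBib person class Q5 as the reconciliation type is an assumption):

```python
import json
import requests

ENDPOINT = "https://example.org/en/api"  # placeholder for the LexBib reconciliation service

def reconcile(names: list[str]) -> dict[str, list[str]]:
    """Send name literals to the service and return candidate item IDs per name."""
    queries = {f"q{i}": {"query": name, "type": "Q5"} for i, name in enumerate(names)}
    response = requests.post(ENDPOINT, data={"queries": json.dumps(queries)})
    response.raise_for_status()
    result = response.json()
    return {name: [candidate["id"] for candidate in result[f"q{i}"]["result"]]
            for i, name in enumerate(names)}

print(reconcile(["Lindemann, David"]))  # e.g. {'Lindemann, David': ['Q...']}
```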
Indexation of bibliographical items with LexVoc terms
- buildbodytxts.py produces a large JSON file containing the full text bodies, as needed for the indexation process and for the Elexifinder export (see below). Full text bodies are taken from one of several sources, by availability, with the following priority ranking (see the selection sketch after this list):
  1. Manually produced full text body TXT.
  2. GROBID-produced full text body TXT.
  3. Zotero-produced "pdf2txt" raw TXT.
  4. The abstract recorded in Zotero.
  5. The article title recorded in Zotero.
- The script also lemmatizes the text bodies (this currently works for English and Spanish, using spaCy).
- buildtermindex.py finds labels (lexicalisations) of LexVoc terms in the full text JSON file; a sketch follows below. Term labels are also searched for in a lemmatized version (this is relevant for many multiword terms). Term labels that produce many false positives due to ambiguity or parallel use in general language ("article", "case", "example", etc.) are filtered using a stoplist. This currently works for English and Spanish.
- The script also collects frequency data:
  - Mention counts (hits) for the label(s) of each term in each article.
  - Relative frequency for the label(s) of each term in each article (hits/tokens).
- writefoundterms.py uploads this information to LexBib wikibase.
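A sketch of the priority-ranked source selection described above (the folder layout and file naming are assumptions for illustration; abstractNote is the Zotero field holding the abstract):

```python
from pathlib import Path

def best_fulltext(qid: str, zotero_item: dict) -> str:
    """Return the best available text body for one item, by priority."""
    candidates = [
        Path("txt_manual") / f"{qid}.txt",   # 1. manually cleaned body
        Path("txt_grobid") / f"{qid}.txt",   # 2. GROBID-extracted body
        Path("txt_zotero") / f"{qid}.txt",   # 3. raw Zotero pdf2txt output
    ]
    for path in candidates:
        if path.is_file():
            return path.read_text(encoding="utf-8")
    # 4./5. fall back to the abstract, then the title, recorded in Zotero
    return zotero_item.get("abstractNote") or zotero_item.get("title", "")
```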
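And a minimal sketch of the term lookup and frequency counting; the input format and Q-IDs are invented, and stop-label filtering is assumed to have happened beforehand:

```python
import re

def index_terms(text: str, term_labels: dict[str, list[str]]) -> dict[str, dict]:
    """Count label hits per LexVoc term and derive a relative frequency (hits/tokens)."""
    tokens = max(len(text.split()), 1)
    index = {}
    for term_qid, labels in term_labels.items():
        hits = sum(len(re.findall(rf"\b{re.escape(label)}\b", text, re.IGNORECASE))
                   for label in labels)
        if hits:
            index[term_qid] = {"hits": hits, "relfreq": hits / tokens}
    return index

lemmatized_body = "the lemma list and the lemma selection criteria"
print(index_terms(lemmatized_body, {"Q10001": ["lemma list", "lemma selection"]}))
# {'Q10001': {'hits': 2, 'relfreq': 0.25}}
```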
Elexifinder export
- elexifinder-export.py generates a dataset as needed for Elexifinder, based on LexBib wikibase output obtained using SPARQL and API calls.
- The Elexifinder export contains one JSON object for each bibliographical item. Following the mapping instructions, each object contains the following (an illustrative example follows this list):
  - As disambiguated entities:
    - authors, author locations, event locations, languages, containing items
    - LexVoc terms found in the full text, as Elexifinder "categories".
  - Publication title and date
  - URL of the corresponding Zotero item
  - URL for full text access (direct download (preferred), a 'landing page', a doi.org link, etc.).
  - The whole full text body, which the Elexifinder architecture processes for Wikification (Elexifinder "concepts").
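For illustration, one such record might look as follows; all field names and values here are invented placeholders, the authoritative field layout being defined by the mapping instructions referenced above:

```python
example_record = {
    "title": "An example article title",
    "date": "2021",
    "authors": ["Q501"],                 # disambiguated LexBib person items (invented IDs)
    "authorLocations": ["Q301"],         # place items
    "eventLocations": [],                # place items
    "languages": ["en"],
    "container": "Q9999",                # containing item (e.g. proceedings volume)
    "categories": ["Q10001", "Q10002"],  # LexVoc terms found in the full text
    "zoteroUrl": "https://www.zotero.org/groups/GROUP_ID/items/ITEM_KEY",
    "fulltextUrl": "https://doi.org/10.1234/example",  # placeholder DOI
    "body": "The whole full text body ...",
}
```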
Maintenance tasks
A set of Python scripts performs database maintenance tasks:
- For LexBib wikibase items aligned with Wikidata items (using property P2):
  - Import of preferred labels (rdfs:label) and alias labels (skos:altLabel) from Wikidata.
  - Import of values of Wikidata-aligned properties (see lists of properties and Wikidata alignment using these queries).
- For all items:
  - Setting of item descriptions (schema:description) according to class; a sketch follows below. For example, a BibItem (class Q3) receives a description containing author last names and year, such as "Publication by Kosem & Lindemann (2021)". This is useful for visual disambiguation of items in LexBib search results.
  - Updating of statements pointing to redirect items, i.e. to items that have been merged into another item.
- Related to LexVoc:
  - Updating of skos:narrower (P73) relations according to skos:broader (P72), the inverse relation.
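A sketch of the class-dependent description pattern for BibItems; the 'et al.' truncation rule is an assumption, not necessarily the maintenance script's actual behaviour:

```python
def bibitem_description(last_names: list[str], year: str) -> str:
    """Build a schema:description for a BibItem (class Q3) from author names and year."""
    if len(last_names) > 2:
        authors = f"{last_names[0]} et al."   # assumed truncation for long author lists
    else:
        authors = " & ".join(last_names)
    return f"Publication by {authors} ({year})"

print(bibitem_description(["Kosem", "Lindemann"], "2021"))
# Publication by Kosem & Lindemann (2021)
```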
LexVoc Lexonomy
A set of Python scripts performs transformations from and to the Lexonomy XML format. This is needed for LexVoc translation on Lexonomy.
- buildlexonomy.py builds 38 bilingual Lexonomy XML dictionaries out of LexVoc SKOS data (see the sketch after this list).
- mergeddict2lwb.py collects translation equivalents from Lexonomy XML (merged on Lexonomy server), and writes them to LexBib wikibase.
- getdicts.py collects translation equivalents from Lexonomy XML (single dictionary).
- statsfrommergeddict.py and getstats.py produce data rows about translation progress.
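A minimal sketch of turning one SKOS concept into a bilingual Lexonomy entry; the XML element names here are hypothetical, the actual schema being defined by buildlexonomy.py:

```python
from xml.etree import ElementTree as ET

def concept_to_entry(qid: str, en_label: str, translation: str = "") -> str:
    """Serialize one LexVoc concept as a bilingual dictionary entry."""
    entry = ET.Element("entry", id=qid)
    ET.SubElement(entry, "headword").text = en_label        # English source label
    ET.SubElement(entry, "translation").text = translation  # target-language slot
    return ET.tostring(entry, encoding="unicode")

print(concept_to_entry("Q10001", "lemma list"))  # invented Q-ID
# <entry id="Q10001"><headword>lemma list</headword><translation /></entry>
```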
Outlook
The following tasks are planned, and awaiting a detailed workflow design:
- Indexation of bibliographical items written in languages other than English and Spanish:
  - As soon as LexVoc translation is completed, and a lemmatization procedure for other languages is implemented.
- Evaluation of LexVoc terms as content-describing indicators:
  - Idea: Authors rate the content descriptors (LexVoc terms) assigned to their articles. The rating can be used to improve the indexation process (e.g. discard descriptors repeatedly marked as irrelevant, or prioritize descriptors according to a certain frequency threshold).
- Alignment of person (and organization) items to Wikidata and VIAF:
  - This can be done using Open Refine. An experiment using a subset of LexBib showed that about 25% of LexBib persons are found on Wikidata, and around 40% on VIAF. Person entity data on Wikidata (example for an incomplete Wikidata entry) contains ORCID identifiers, among other person metadata such as birth (and death) dates and affiliations. Person entity data on VIAF contains references to authored publications (of all domains), birth (and death) dates, etc.
  - For persons not found on Wikidata, new Wikidata person items can be created.
  - Matching person items on Wikidata can be enriched with LexBib data (authorship relations).
- Alignment of event items to Wikidata:
  - This has been done for the EURALEX and eLex conference series (Wikidata items have been created and described, example).
- Alignment of bibliographical items to Wikidata (creation of Wikidata items and transfer of bibliodata):
  - A DOI-matching experiment has revealed that so far less than 1% of LexBib bibliographical items are found on Wikidata.
  - A transfer of bibliodata and author items to Wikidata enables the use of tools like Scholia for lexicographical articles.
  - A transfer of bibliodata to Wikidata enables its inclusion in WikiCite and OpenCitations, i.e. into open citation graphs.
  - Registering DOIs (via Crossref or DataCite, see comparison) for LexBib articles that do not have such an identifier (the vast majority) would include LexBib articles in citation graphs maintained by commercial providers (Web of Science, Scopus).
- Development of a metadata model for Lexical Resources such as dictionaries, and cataloguing of dictionaries:
  - Regarding the data model, work is in progress; see Dictionaries in LexBib, and LexMeta.
  - Regarding cataloguing, first experiments have been carried out, e.g. using datasets from Glottolog, an open repository that contains several thousand dictionary metadata sets, and Obelex-dict, a catalogue of e-dictionaries.
- Definition and representation of bibliographic item relations:
  - Citation relation (BibItem A cites BibItem B)
  - Review relation (BibItem A reviews BibItem B or Lexical Resource C)