
LanguageTool scripting to find lines containing errors

Hey there, I work as a volunteer for the Common Voice project by Mozilla. This project needs enormous numbers of sentences that people can record to create a dataset for speech recognition. I want to import various sentence corpora into this project, containing hundreds of thousands, sometimes millions, of sentences. Manual review is not possible, which is why I thought it might be a good idea to write a script using LanguageTool that checks every line of a file and, if the line contains a (red) error, deletes it completely.

Would this be possible with the LanguageTool API? I basically need two things:

  • Capacity for mass checks of hundreds of thousands of sentences in an acceptable time (maybe one hour or so)
  • A way to know only whether a line contains an error; where it occurs and what kind of error it is are irrelevant.

I am just starting to understand the API, so maybe I can answer this myself in a few days, but I would like to hear your thoughts about this.

Hi, I think the LT API can do what you need. But the HTTP API has limitations, so if you need to check thousands of sentences you should install LT locally (http://wiki.languagetool.org/http-server). Also, make sure to install the ngram data so that all error detection rules are active (http://wiki.languagetool.org/finding-errors-using-n-gram-data). Performance depends on the language; 20 ms per sentence might be a good value for estimating the total time.
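To put that figure in perspective using only the numbers already mentioned (20 ms per sentence, "hundreds of thousands" of sentences, a target of about one hour), a quick back-of-the-envelope check:

```shell
# Rough runtime estimate from the 20 ms/sentence figure above.
# The sentence count is illustrative ("hundreds of thousands").
ms_per_sentence=20
sentences=200000
total_s=$(( ms_per_sentence * sentences / 1000 ))
echo "$sentences sentences take about $(( total_s / 60 )) minutes"
```

At these numbers the total comes out to roughly an hour, so the original goal looks feasible on a local server.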


Stefan,

I wrote a little Java application based on LanguageTool for the same purpose: filtering Catalan Wikipedia sentences for the Common Voice project. We take into account errors detected by LanguageTool as well as other conditions, such as sentence length. See: https://github.com/Softcatala/filter-wiki-corpus-lt

Unfortunately, the Common Voice team rejected this approach. Because of licensing issues, they insisted on running the Wikipedia filtering themselves. We had to use the tools made by Mozilla, and we got lower-quality results. So make sure your work is going to be accepted before you start working.


That’s great! Thanks for the link.

I know this process, and it really isn’t ideal. But you can delete sentences after the import; I did that for German once (I deleted sentences containing non-German letters).

I want to use this tool for two things: preparing sentences for the Sentence Collector and analysing large sentence corpora like the Europarl corpus.

So here is a first little version of a bash script. Right now it is a dirty hack that only checks whether LanguageTool says anything at all about a sentence; I will check more details in the future:
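A minimal sketch of such a filter, assuming a local LanguageTool server at its default port (started as described in the wiki link above); the URL, variable names, and file handling here are illustrative, not the author's actual script:

```shell
#!/usr/bin/env bash
# Filter a corpus file: print only lines for which a local LanguageTool
# server reports no matches at all.

LT_URL="${LT_URL:-http://localhost:8081/v2/check}"
LANG_CODE="${LANG_CODE:-eo}"   # Esperanto, as in the post

# True (exit 0) when the JSON response contains at least one match,
# i.e. LanguageTool said *anything* about the sentence. A clean
# response contains an empty matches array: "matches":[]
has_error() {
  ! printf '%s' "$1" | grep -qF '"matches":[]'
}

# Send one sentence to the server; print it only if it came back clean.
filter_line() {
  local response
  response=$(curl -s --data-urlencode "language=$LANG_CODE" \
                     --data-urlencode "text=$1" "$LT_URL")
  has_error "$response" || printf '%s\n' "$1"
}

# Usage: ./filter.sh corpus.txt > clean.txt
if [ -n "${1:-}" ]; then
  while IFS= read -r line; do
    filter_line "$line"
  done < "$1"
fi
```

Checking for an empty matches array with grep avoids a JSON parser dependency, at the cost of being a crude string match; anything more detailed (rule IDs, error positions) would need real JSON parsing.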

I’ve chosen Esperanto for testing because it has fewer rules than German, so there are fewer false positives that are just style comments. But in the future this script will work for any language, and I will ignore some kinds of errors.
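For ignoring some kinds of errors, the v2 check endpoint accepts `disabledRules` and `disabledCategories` parameters (comma-separated IDs), so uninteresting matches can be suppressed server-side instead of being parsed out afterwards. A small sketch of how the request data could be assembled (the helper name and category IDs are illustrative, and real requests should URL-encode the text, e.g. via curl's --data-urlencode):

```shell
# Hypothetical helper: build the form data for a /v2/check request.
# $1 = language code, $2 = text, $3 = categories to disable (may be empty)
build_check_args() {
  local args="language=$1&text=$2"
  if [ -n "${3:-}" ]; then
    args="$args&disabledCategories=$3"
  fi
  printf '%s' "$args"
}

# Example: build_check_args eo "Saluton mondo" "STYLE,TYPOGRAPHY"
```

With style categories disabled, the empty-matches check from the script above would only reject sentences with "red" errors.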