
Chinese part development daily record

(Daniel Naber) #76

For ngrams, we use Lucene in some cases, as mentioned here: it means you need a fast hard disk (SSD), but memory usage will be very low, as only the index needs to be in memory. While Lucene is a full-text search engine, we basically use it for lookups: we provide the ngram as a search term and get back its occurrence count. That plus some calculation and you have a very basic language model. Let me know if you need to know more.
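The "lookup plus some calculation" idea can be sketched like this; the `HashMap` stands in for the Lucene index (which maps an ngram term to its occurrence count), and the class name and counts are made up for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a count-based language model. The Map stands in for
// the Lucene index: with Lucene, count() would be a term query returning
// the stored occurrence count of the ngram.
public class NgramCountModel {
    private final Map<String, Long> counts = new HashMap<>();

    public void put(String ngram, long count) {
        counts.put(ngram, count);
    }

    // Occurrence-count lookup -- the only thing we need from the "search engine".
    public long count(String ngram) {
        return counts.getOrDefault(ngram, 0L);
    }

    // The "some calculation" part: P(w3 | w1 w2) = count(w1 w2 w3) / count(w1 w2)
    public double condProb(String w1, String w2, String w3) {
        long bigram = count(w1 + " " + w2);
        if (bigram == 0) {
            return 0.0;
        }
        return (double) count(w1 + " " + w2 + " " + w3) / bigram;
    }
}
```

Multiplying these conditional probabilities over a sentence gives the basic sentence probability described above.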

(Ze Dang) #77

I read the code in languagetool/languagemodel. The LuceneLanguageModel class can calculate the probability of a complete sentence, or return the occurrence count of a sequence of words, when given ngram data in the appropriate format. But in my code, the ngram probabilities are calculated by a back-off model and saved in the ngram data. So, if I want to use Lucene, do I need to write a helper class for it?
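A back-off scheme of the kind mentioned can be sketched roughly as follows. This is "stupid backoff" with a fixed penalty, chosen only to illustrate the idea; the actual back-off scheme and weights used by the rule may differ:

```java
import java.util.HashMap;
import java.util.Map;

// Rough sketch of a back-off score: use the trigram's relative frequency
// if it was seen, otherwise back off to the bigram with a fixed penalty.
// "Stupid backoff" with alpha = 0.4, for illustration only.
public class BackoffModel {
    private static final double ALPHA = 0.4;
    private final Map<String, Long> counts = new HashMap<>();

    public void put(String ngram, long count) {
        counts.put(ngram, count);
    }

    private long count(String ngram) {
        return counts.getOrDefault(ngram, 0L);
    }

    public double score(String w1, String w2, String w3) {
        long tri = count(w1 + " " + w2 + " " + w3);
        long bi = count(w1 + " " + w2);
        if (tri > 0 && bi > 0) {
            return (double) tri / bi;      // trigram seen: relative frequency
        }
        long biTail = count(w2 + " " + w3);
        long uni = count(w2);
        if (biTail > 0 && uni > 0) {
            return ALPHA * biTail / uni;   // back off to the bigram
        }
        return ALPHA * ALPHA * 1e-7;       // floor for completely unseen ngrams
    }
}
```

Precomputing such scores and storing them in the ngram data is exactly what makes the format differ from the raw counts that LuceneLanguageModel expects.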

(Daniel Naber) #78

So your question is how to build such a Lucene index, is that correct? You can check out for some code that creates an index. Actually, all the classes that use Lucene’s IndexWriter do this.

(Daniel Naber) #79

So I need to compile your fork first with Maven, is that right? And then copy the two files into the result? For me, that doesn’t work yet; e.g., your second example doesn’t find an error. Any idea?

(Ze Dang) #80

I created the jar by running mvn package in the language-module/zh directory. The installation is then the same as last time; I followed the same steps and it worked. Or you can try the following steps.

  • Download the code from my GitHub repository.
  • Run mvn install -DskipTests in the root directory.
  • Download and then extract it to languagetool-standalone/target/LanguageTool-4.2-SNAPSHOT/LanguageTool-4.2-SNAPSHOT/org/languagetool/resource.
  • Download word_trigram.binary and char_unigram.binary. Copy them to languagetool-standalone/target/LanguageTool-4.2-SNAPSHOT/LanguageTool-4.2-SNAPSHOT/org/languagetool/resource/zh.

(Daniel Naber) #81

Thanks, that works for zh-TW. For zh-CN, I get:

Exception in thread "main" java.lang.RuntimeException: Path zh/char_unigram.binary not found in class path at /org/languagetool/resource/zh/char_unigram.binary

(Ze Dang) #82

I’ve added the link above. You should download the file and copy it to the same path as word_trigram.binary.

(Daniel Naber) #83

Okay, it’s working now, I think. It’s slow only because of the one-time setup, isn’t it? Have you checked the performance per sentence (e.g. in sentences per second), not counting the setup time?

(Ze Dang) #84

Setup: about 7 s.
Checking: about 120 sentences per second.

(Ze Dang) #85

GSoC Phase 3


  • Make ChineseNgramProbabilityRule available for zh-TW.
  • Optimize the checking speed of the rule and reduce memory usage.
  • Fix bugs.

(Daniel Naber) #86

What about memory usage, are you working on lowering that?

(Ze Dang) #87


(Daniel Naber) #88

Great - please also remember to post short but daily reports here.

(Ze Dang) #89

Hi dnaber,

I have made a comparison of my new rule with a Lucene-based solution and a BerkeleyLM-based solution.

| Rule       | Setup time | Time per sentence | Memory usage | Ngram data size             |
|------------|------------|-------------------|--------------|-----------------------------|
| Lucene     | 8 s        | 4 s               | 2 GB         | 3.65 GB (Lucene index)      |
| BerkeleyLM | 3 s        | 0.1 s             | 4 GB         | 1.7 GB (hash-based LM binary) |

(Daniel Naber) #90

What kind of hard disk did you use for this test? An SSD?

(Ze Dang) #91

SSD. I trained the language model again to improve accuracy and found a bug in my test code: I activated ChineseNgramProbabilityRule in SimplifiedChinese and then created another instance of ChineseNgramProbabilityRule, so the ngram data was loaded into memory twice and it took 8 GB to run.
After fixing the bug, I can run java -jar languagetool-commandline -l zh-CN <text> without -Xmx8000m.

(Daniel Naber) #92

How many lookups are you running per sentence? I’m just surprised that Lucene is so much slower than BerkeleyLM.

(Ze Dang) #93

In order to find the right character at each position in the sentence, the rule replaces every character with its candidates from a confusion dictionary and calculates the probability of the resulting sentence (the sentence with the maximum probability is taken as the correct one). So the longer the sentence is, the more queries it runs.
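The replace-and-rescore loop described above might look roughly like this; the confusion set and the scoring function here are toy stand-ins for the real data and language model:

```java
import java.util.List;
import java.util.Map;
import java.util.function.ToDoubleFunction;

// Sketch of the correction search: for every character position, try each
// candidate from the confusion set, score the resulting sentence with the
// language model, and keep the highest-scoring variant.
public class ConfusionSearch {

    public static String bestVariant(String sentence,
                                     Map<Character, List<Character>> confusionSet,
                                     ToDoubleFunction<String> lmScore) {
        String best = sentence;
        double bestScore = lmScore.applyAsDouble(sentence);
        for (int i = 0; i < sentence.length(); i++) {
            List<Character> candidates =
                confusionSet.getOrDefault(sentence.charAt(i), List.of());
            for (char candidate : candidates) {
                char[] chars = sentence.toCharArray();
                chars[i] = candidate;
                String variant = new String(chars);
                double score = lmScore.applyAsDouble(variant);
                if (score > bestScore) {  // max-probability sentence wins
                    bestScore = score;
                    best = variant;
                }
            }
        }
        return best;
    }
}
```

Each position contributes one language-model query per confusion candidate, which is why the number of queries grows with sentence length and with the size of the character inventory.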

(Ze Dang) #94

July 28th

  • Ngram Rule supports zh-TW now.

As I said above, in order to find replacements for error characters, my ngram rule can’t avoid querying. Unlike English, which has only 26 letters, Chinese has more than 7,000 characters, so the size of the query table is on a completely different scale.

I tried to make the Lucene-based approach run faster, but it turns out the best I can get is 1800 ms per sentence, while the BerkeleyLM one takes 80 ms per sentence. I also tried to train a smaller language model to make the BerkeleyLM one use less memory. However, the results showed that while a smaller LM does reduce memory usage, it greatly decreases checking accuracy.

What’s your idea?

(Daniel Naber) #95

I see. BerkeleyLM’s memory use might make it difficult to get this into production. Could you keep both versions in the code, so one can switch between them (it doesn’t need to be at runtime; a small code switch would be enough)?
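Such a compile-time switch could be as simple as the following sketch; the interface and class names here are illustrative stand-ins, not LanguageTool’s actual API:

```java
// Sketch of a small code switch between two language-model backends.
// The interface and implementations are made-up stand-ins.
interface NgramModel {
    double sentenceProb(String sentence);
}

class LuceneBackedModel implements NgramModel {
    @Override
    public double sentenceProb(String sentence) {
        return 0.1;  // stub: a real implementation would query the Lucene index
    }
}

class BerkeleyBackedModel implements NgramModel {
    @Override
    public double sentenceProb(String sentence) {
        return 0.2;  // stub: a real implementation would query the BerkeleyLM binary
    }
}

public class ModelFactory {
    // Flip this constant (or read a system property) to switch backends.
    static final boolean USE_BERKELEY = true;

    public static NgramModel create() {
        return USE_BERKELEY ? new BerkeleyBackedModel() : new LuceneBackedModel();
    }
}
```

Everything else in the rule then depends only on the interface, so swapping backends never touches the checking logic.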