Thanks, I’ll use it!
Week 6: 10 June
Working on the multiple languages support for the suggestions orderer, hope to deploy tonight.
Found a way to painlessly use XGBoost with Java: jpmml-xgboost.
Week 7: 11 June
Training data preprocessing (took more time than I thought).
Working on the multiple languages support for the suggestions orderer: added mock models for all the languages as placeholders until the real models finish training.
Week 7: 12 June – 15 June
- Studying jpmml-xgboost
- Working on features extractor update
- Training the models
Measured the quality of the rule-specific models.
Will now measure current released solution’s quality.
There is not enough data for some languages to train and validate models, so I'll group them. I'll also experiment with grouping all the languages and with various subsets. In addition, I'll collect and use POS-tag ngram frequencies from the correct-sentences data.
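The grouping step might be sketched like this (an illustrative Python sketch; the sentence counts, language set, and cutoff are made-up examples, not numbers from the project):

```python
from collections import Counter

# Hypothetical per-language counts of available training sentences.
sentence_counts = Counter({
    "en": 120_000, "de": 80_000, "ru": 50_000,
    "ca": 3_000, "uk": 1_500, "br": 400,
})

MIN_SENTENCES = 10_000  # assumed cutoff for training a per-language model

def group_languages(counts, min_sentences):
    """Languages with enough data get their own model;
    the rest are pooled into one shared group."""
    own = [lang for lang, n in counts.items() if n >= min_sentences]
    pooled = [lang for lang, n in counts.items() if n < min_sentences]
    return sorted(own), sorted(pooled)

own, pooled = group_languages(sentence_counts, MIN_SENTENCES)
print(own)    # languages trained individually
print(pooled) # languages sharing a grouped model
```

Pooling the low-resource languages trades some language specificity for enough data to train and validate at all.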
Committed a version with PMML syntax-based models, so it now can be installed without painful extra dependencies handling – that was the main problem of the original xgboost models evaluator.
Now the model requires ngram data to work, and I'm working on models for those languages that don't have ngram data. The integration is almost done.
- finish the automatic handling of ngram data presence – if there is no ngram data for a language, the proper model should be chosen automatically
- finish models for the languages not using ngram data
- improve all the models
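The fallback logic from the first item above could look roughly like this (a sketch only; the function name, model names, and directory layout are hypothetical, not the project's actual code):

```python
import os

def choose_model(lang, ngram_data_dir):
    """Pick the ngram-based model when ngram data exists for the
    language, otherwise fall back to the ngram-free model.
    (Model names and the per-language directory layout are
    placeholders, not the real ones.)"""
    if os.path.isdir(os.path.join(ngram_data_dir, lang)):
        return f"{lang}-with-ngrams"
    return f"{lang}-no-ngrams"
```

The point is that the caller never has to know which variant exists: the presence check decides, so adding ngram data for a new language automatically upgrades it to the richer model.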
I’ve also had to spend a week without my laptop. During this time I learned some gradle-maven migration info so I’ve committed a couple of migration steps then. It now builds without errors, but not all the tests are passing and the final .zip package is not created yet.
Is there any code showing how to create a Lucene index for ngrams? Does Lucene build 1-grams, 2-grams, and 3-grams just from the text, or should the frequencies be counted manually and then given to Lucene?
Lucene is very low-level: it just takes an ngram and its count, so you need to do everything manually. There's AggregatedNgramToLucene, which takes a text file and turns it into a Lucene index.
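Since Lucene won't count anything for you, the aggregation has to happen beforehand. A minimal sketch of counting 1-, 2-, and 3-grams over tokenized sentences (pure Python; the tab-separated `ngram<TAB>count` line format is an assumption here – check what AggregatedNgramToLucene actually expects):

```python
from collections import Counter

def count_ngrams(sentences, max_n=3):
    """Count all 1..max_n-grams over pre-tokenized sentences."""
    counts = Counter()
    for tokens in sentences:
        for n in range(1, max_n + 1):
            for i in range(len(tokens) - n + 1):
                counts[" ".join(tokens[i:i + n])] += 1
    return counts

sentences = [["the", "cat", "sat"], ["the", "cat", "ran"]]
counts = count_ngrams(sentences)
# One "ngram<TAB>count" line per entry (format assumed; verify
# against the tool's documentation before feeding it in).
lines = [f"{ng}\t{c}" for ng, c in sorted(counts.items())]
```

The same loop works unchanged for POS-tag sequences: just pass sentences of tags instead of sentences of words.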
Thanks! I'll use it to store ngrams of POS tags.
As your changes have been merged now, do you have an up-to-date evaluation that shows by how much results have been improved due to your changes? Also, are there any performance issues to be expected when the feature is activated?
Also, do you have some specific example where your new code improved suggestion ordering? I’d like to try it.
Please see my review comments at https://github.com/languagetool-org/languagetool/pull/1115
I will provide the evaluation in the next couple of days.
Isn’t your code active for German or didn’t you run an evaluation for it?
I have opened some issues for what I think are the remaining blockers to activating this feature on the production system:
Edited the post and added missing evaluation.
Thanks, is the code available to re-run the evaluation? What are your future plans, is there a chance you’re going to work on the remaining issues linked above?
GSoC 2018 Work Summary
What was done
During this Summer of Code I worked on several tasks.
First, the improvement of spellchecker suggestions sorting using a machine-learning approach included the following submissions in the languagetool repository on GitHub:
Code for the model learning part is in this repo.
The ordering of suggestions is now done with the predictions of the trained model (XGBoost was used), and the quality of the resulting ordering was improved.
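Conceptually, the reordering step is a sort by predicted score (a sketch with a dummy scorer standing in for the trained XGBoost model; the names and numbers are illustrative only):

```python
def reorder_suggestions(suggestions, score):
    """Sort candidate corrections by predicted score, best first.
    `score` stands in for the trained model's prediction over
    features extracted for each (error, candidate) pair."""
    return sorted(suggestions, key=score, reverse=True)

# Dummy scores; the real system computes these with XGBoost.
toy_scores = {"their": 0.9, "there": 0.6, "they're": 0.2}
ordered = reorder_suggestions(["there", "they're", "their"],
                              lambda s: toy_scores[s])
print(ordered)  # best-scored suggestion first
```

The spellchecker's candidate generation is unchanged; only the presentation order of the candidates is driven by the model.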
Second, switching to the modern server-side framework:
- #1046 (open)
Third, migration from Maven to Gradle:
- #1045 (open)
I'm willing to continue contributing to languagetool outside GSoC; in particular, I plan to do the following within my project:
- further improve the ML model quality (parameter tuning, feature engineering, adding new features)
- finish transition to Gradle
- finish transition to Spring
- address all suggested corrections and get open PRs merged.
Oleg, thanks for taking part in GSoC! We’re looking forward to your future contributions.