There is currently some duplication between the various page objects
and other code. It should be cleaned up incrementally.
Change-Id: I592829a00ca65bbecd5399b773c885c764c1cc06
This is a basic test that checks that writing the preferences
does not result in an error.
It depends on I8fdc1a05 and I18a5ffb5.
It is very rudimentary: The saving test only checks that the API call
didn't result in an error. It should be enough to check for bug 41763,
but it doesn't really check that the options were written correctly.
It also does not distinguish between anonymous and logged-in users,
whose preferences work differently.
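In outline, the saving check is roughly the following sketch; the
preferences accessor and its set()/save() signature are assumptions
here, not necessarily the actual extension API:

    QUnit.test( 'Saving preferences', function ( assert ) {
        var done = assert.async(),
            // Assumed accessor; the real name and signature may differ.
            prefs = mw.uls.preferences();

        prefs.set( 'hello', 'world' );
        prefs.save( function ( success ) {
            // Only checks that the API call did not fail; it does not
            // verify that the option value was actually stored.
            assert.ok( success, 'Saving preferences did not result in an error' );
            done();
        } );
    } );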
Change-Id: I417089eea64e8a84bd38a00bbb31cbb77dc5bd68
The code 'als' is used in a non-standard way in MediaWiki:
it may be used to represent the Alemannic language, whose standard
code is 'gsw', while in ISO 639-3 'als' refers to Tosk Albanian,
which is not currently used in any way in MediaWiki.
This local fix adds a redirect for it.
Also add a test to check that it works correctly.
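Roughly, the test asserts something like the following; the helper name
isRedirect is an assumption about the langdb utility API:

    QUnit.test( 'Language code redirects', function ( assert ) {
        // 'als' should resolve to 'gsw' (Alemannic) via the redirect added here.
        assert.strictEqual(
            $.uls.data.isRedirect( 'als' ),
            'gsw',
            "'als' redirects to 'gsw'"
        );
    } );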
Change-Id: Id904cab129eb58f8b96ce493e77d21da7c44ea8b
A very simple mechanism for importing per-country language lists
from CLDR to ULS' langdb.
If I understand correctly, we only need the languages spoken in a country,
ordered by number of speakers. The CLDR data already contains this, and it
should be mostly usable as is.
Also added a utility function and a test.
Some tweaks to override the CLDR data are still needed:
* The data as it is omits some useful languages. For example, Amharic is not
listed in Eritrea.
* Some countries have a very large number of languages; India, for example,
  has 75. That may well be accurate, but it is not practical in a language
  selector. Hand-picking or limiting the choice to the top X languages
  could be useful, but requires thought.
* Some language codes are standard, but differ from Wikipedia practice,
  for example "pa_Guru" (we just write "pa"). A mapping of codes may be
  needed (see the sketch below).
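A minimal sketch of that mapping and of limiting a territory's list to the
top N languages; only "pa_Guru" comes from the notes above, everything else
is hypothetical and would need to be curated by hand:

    // Hypothetical CLDR-to-wiki code mapping.
    var cldrToWiki = {
        'pa_Guru': 'pa'
    };

    function normalizeCldrCode( code ) {
        return cldrToWiki[ code ] || code;
    }

    // Keep only the n most spoken languages of a territory, assuming the
    // input list is already ordered by number of speakers.
    function topLanguages( orderedCodes, n ) {
        return orderedCodes.map( normalizeCldrCode ).slice( 0, n );
    }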
Change-Id: I3c0cd5a9118997ba39a4f3695978e359f3de6956
The default value of this option will be $.uls.data.autonyms().
It can be set to limit the language selection to a given set of languages.
Updated the examples to set it from a config variable, wgULSLanguages.
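For example, a sketch of the intended usage; the option name 'languages'
is an assumption about the new option:

    // Restrict the selector to the wiki's configured languages, falling
    // back to all known languages ($.uls.data.autonyms()) when unset.
    $( '.uls-trigger' ).uls( {
        languages: mw.config.get( 'wgULSLanguages' ) || $.uls.data.autonyms()
    } );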
Change-Id: Ia322cbdcdb14f08619d2e4df5b23e2702841d147
This does not add much functionality; that will come in future commits.
It mostly contains cleanup and refactoring of the display settings and
language settings code.
Change-Id: I7fbc3ebb9b67c1afd80f159c2d82cd2a1c6bea74
Multiple scripts and autonyms are not currently supported by ULS utils.
I commented out the problematic lines and kept the current de facto
situation in Wikipedia.
This is a TODO item, but it needs proper specification.
Also updated the tests somewhat.
Change-Id: I8cdc6ae430f5bb5af4b1890abf6e71a91b6beb3d
Added license and upstream information, and a script to compile
repoconfig automatically from metadata.
Also contains integration code.
Change-Id: Ib39668249dd568a1f6017f0c08a3b9d1e2067ae4
DRAFT
Some languages don't yet have autonyms, but probably all languages should
have them. The test currently fails because there are many such languages,
and I am not aware of a nice way to skip a test in QUnit (Perl testing,
for example, has a proper skipping mechanism).
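The check is roughly the following sketch, assuming $.uls.data.autonyms()
returns a language-code-to-autonym map:

    QUnit.test( 'Autonyms', function ( assert ) {
        // Every language in langdb should have a non-empty autonym.
        $.each( $.uls.data.autonyms(), function ( code, autonym ) {
            assert.ok( autonym, 'Language ' + code + ' has an autonym' );
        } );
    } );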
Also change orphanScripts() to return all scripts without groups.
Change-Id: I21c13d7bba89f3b991bc340e6ff6e040c704beb8
* Introduce the Levenshtein algorithm (see the sketch below)
* New API param 'typos' to give the number of allowed typos
* Test cases
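For reference, a minimal sketch of the distance function; the actual
implementation and its integration with the API may differ:

    // Classic dynamic-programming Levenshtein distance: the minimum number
    // of single-character insertions, deletions and substitutions needed to
    // turn string a into string b.
    function levenshteinDistance( a, b ) {
        var i, j, cost,
            d = [];

        for ( i = 0; i <= a.length; i++ ) {
            d[ i ] = [ i ];
        }
        for ( j = 0; j <= b.length; j++ ) {
            d[ 0 ][ j ] = j;
        }

        for ( i = 1; i <= a.length; i++ ) {
            for ( j = 1; j <= b.length; j++ ) {
                cost = a.charAt( i - 1 ) === b.charAt( j - 1 ) ? 0 : 1;
                d[ i ][ j ] = Math.min(
                    d[ i - 1 ][ j ] + 1,        // deletion
                    d[ i ][ j - 1 ] + 1,        // insertion
                    d[ i - 1 ][ j - 1 ] + cost  // substitution
                );
            }
        }

        return d[ a.length ][ b.length ];
    }

A search can then treat a name as a match when
levenshteinDistance( query, name ) <= typos.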
Change-Id: I22bf34d08a910d1509d7eab5adc292eadc9a7c7d
Non-ASCII characters broke the functionality.
Saving as UTF-8 fixes this.
Also removed an unneeded <?php statement from a JS file and replaced
non-ASCII pretty apostrophes with straight ASCII ones.
Change-Id: Ic6719fe0863bc5d8ae19abbf01cfbb7b2b714f12
Tests about the Greek script failed because new languages were added to
the language database. Also, the comparison had to use deepEqual rather
than strictEqual. I fixed the tests.
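In short, the difference the fix relies on; the helper name and the
expected list below are illustrative only:

    QUnit.test( 'Languages in the Greek script', function ( assert ) {
        // strictEqual compares arrays by reference, so two equal-looking
        // arrays fail; deepEqual compares the contents.
        assert.deepEqual(
            $.uls.data.getLanguagesInScript( 'Grek' ),
            [ 'el', 'grc', 'pnt' ],
            'Known languages written in the Greek script'
        );
    } );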
Change-Id: I68cf0593354d71bd35c53bac5afe7cabd25182a1