Gutenberg Children Books 2018-12-14

LG-English, agglomerative clustering, 500 clusters, min_word_count = 31/21/11/2/1

Link Grammar 5.5.1, test_grammar updated 2018-10-19.
This notebook is shared as the static Gutenberg-Children-Books-2018-12-14.html file.
Output data is shared via the Gutenberg-Children-Books-2018-12-14 directory.

Basic settings

In [1]:
import os, sys, time
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path: sys.path.append(module_path)
from src.grammar_learner.utl import UTC, test_stats
from src.grammar_learner.read_files import check_dir, check_corpus
from src.grammar_learner.write_files import list2file
from src.grammar_learner.widgets import html_table
from src.grammar_learner.pqa_table import table_rows, params, wide_rows
tmpath = module_path + '/tmp/'
check_dir(tmpath, True, 'none')
start = time.time()
runs = (1,1)

Corpus test settings

In [2]:
corpus = 'GCB' # 'Gutenberg-Children-Books-Caps' 
dataset = 'LG-English'
kwargs = {
    'left_wall'     :   ''          ,
    'period'        :   False       ,
    'context'       :   1           ,
    'min_word_count':   31          ,   # 31/21/11/2/1
    'word_space'    :   'sparse'    ,
    'clustering'    :   ['agglomerative', 'ward'],
    'clustering_metric' : ['silhouette', 'cosine'],
    'cluster_range' :   500         ,
    'top_level'     :   0.01        ,
    'grammar_rules' :   2           ,
    'max_disjuncts' :   1000000     ,   # off
    'tmpath'        :   tmpath      , 
    'verbose'       :   'min'       ,
    'template_path' :   'poc-turtle',
    'linkage_limit' :   1000        }
rp = module_path + '/data/' + corpus + '/LG-E-clean/GCB-LG-English-clean.ull'
cp = rp  # corpus path = reference_path
runs = (1,1)
out_dir = module_path + '/output/' + 'Gutenberg-Children-Books-' + str(UTC())[:10]
if check_corpus(rp, 'min'): print(UTC(), out_dir)
2018-12-14 10:33:00 UTC /home/obaskov/94/language-learning/output/Gutenberg-Children-Books-2018-12-14
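The five test cells below (In [3] .. In [14]) repeat one pattern, varying only min_word_count. A minimal sketch of the equivalent loop, assuming the wide_rows signature used in those cells (the %%capture magic is dropped here, so intermediate output is not suppressed):

table = []
for i, mwc in enumerate([31, 21, 11, 2, 1]):
    kwargs['min_word_count'] = mwc
    line = [[45 + i, corpus, dataset, 0, 0, 'none']]    # test line ids 45..49
    a, _, header, log, rules = wide_rows(line, out_dir, cp, rp, runs, **kwargs)
    table.extend(a)
    display(html_table([header] + a)); print(test_stats(log))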

Tests: "LG English", 500 clusters, min_word_count = 31/21/11/2/1

In [3]:
%%capture
table = []
kwargs['min_word_count'] = 31
line = [[45, corpus, dataset, 0, 0, 'none']]
a, _, header, log, rules = wide_rows(line, out_dir, cp, rp, runs, **kwargs)
table.extend(a)
In [4]:
display(html_table([header] + a)); print(test_stats(log))
Line | Corpus | Parsing | Space | Linkage | Affinity | G12n | Threshold | Rules | MWC | NN | SI | PA | PQ | F1 | Top 5 cluster sizes
45 | GCB | LG-English | cALWEd | ward | euclidean | none | --- | 500 | 31 | --- | 0.0 | 50% | 44% | 0.51 | [1002, 466, 437, 384, 381]
(Column key: MWC = min_word_count; SI = silhouette index; PA = parse-ability; PQ = parse quality; F1 = parse F1 score.)
Cleaned dictionary: 7057 words, grammar learn time: 00:19:28, test time: 01:03:17 (h:m:s)
In [5]:
%%capture
kwargs['min_word_count'] = 21
line = [[46, corpus, dataset, 0, 0, 'none']]
a, _, header, log, rules = wide_rows(line, out_dir, cp, rp, runs, **kwargs)
table.extend(a)
In [6]:
display(html_table([header] + a)); print(test_stats(log))
Line | Corpus | Parsing | Space | Linkage | Affinity | G12n | Threshold | Rules | MWC | NN | SI | PA | PQ | F1 | Top 5 cluster sizes
46 | GCB | LG-English | cALWEd | ward | euclidean | none | --- | 500 | 21 | --- | 0.0 | 51% | 46% | 0.52 | [1028, 625, 620, 584, 551]
Cleaned dictionary: 8929 words, grammar learn time: 00:22:09, test time: 01:03:40 (h:m:s)
In [7]:
%%capture
kwargs['min_word_count'] = 11
line = [[47, corpus, dataset, 0, 0, 'none']]
a, _, header, log, rules = wide_rows(line, out_dir, cp, rp, runs, **kwargs)
table.extend(a)
In [8]:
display(html_table([header] + a)); print(test_stats(log))
Line | Corpus | Parsing | Space | Linkage | Affinity | G12n | Threshold | Rules | MWC | NN | SI | PA | PQ | F1 | Top 5 cluster sizes
47 | GCB | LG-English | cALWEd | ward | euclidean | none | --- | 500 | 11 | --- | 0.0 | 52% | 47% | 0.53 | [1096, 1028, 907, 848, 716]
Cleaned dictionary: 12790 words, grammar learn time: 00:44:23, test time: 01:09:05 (h:m:s)
In [9]:
display(html_table([header] + table))
Line | Corpus | Parsing | Space | Linkage | Affinity | G12n | Threshold | Rules | MWC | NN | SI | PA | PQ | F1 | Top 5 cluster sizes
45 | GCB | LG-English | cALWEd | ward | euclidean | none | --- | 500 | 31 | --- | 0.0 | 50% | 44% | 0.51 | [1002, 466, 437, 384, 381]
46 | GCB | LG-English | cALWEd | ward | euclidean | none | --- | 500 | 21 | --- | 0.0 | 51% | 46% | 0.52 | [1028, 625, 620, 584, 551]
47 | GCB | LG-English | cALWEd | ward | euclidean | none | --- | 500 | 11 | --- | 0.0 | 52% | 47% | 0.53 | [1096, 1028, 907, 848, 716]
In [10]:
%%capture
kwargs['min_word_count'] = 2
line = [[48, corpus, dataset, 0, 0, 'none']]
a, _, header, log, rules = wide_rows(line, out_dir, cp, rp, runs, **kwargs)
table.extend(a)
In [11]:
display(html_table([header] + a)); print(test_stats(log))
Line | Corpus | Parsing | Space | Linkage | Affinity | G12n | Threshold | Rules | MWC | NN | SI | PA | PQ | F1 | Top 5 cluster sizes
48 | GCB | LG-English | cALWEd | ward | euclidean | none | --- | 500 | 2 | --- | 0.0 | 54% | 48% | 0.54 | [5491, 3805, 3098, 1724, 964]
Cleaned dictionary: 31700 words, grammar learn time: 06:19:28, test time: 01:09:13 (h:m:s)
In [12]:
display(html_table([header] + table))
Line | Corpus | Parsing | Space | Linkage | Affinity | G12n | Threshold | Rules | MWC | NN | SI | PA | PQ | F1 | Top 5 cluster sizes
45 | GCB | LG-English | cALWEd | ward | euclidean | none | --- | 500 | 31 | --- | 0.0 | 50% | 44% | 0.51 | [1002, 466, 437, 384, 381]
46 | GCB | LG-English | cALWEd | ward | euclidean | none | --- | 500 | 21 | --- | 0.0 | 51% | 46% | 0.52 | [1028, 625, 620, 584, 551]
47 | GCB | LG-English | cALWEd | ward | euclidean | none | --- | 500 | 11 | --- | 0.0 | 52% | 47% | 0.53 | [1096, 1028, 907, 848, 716]
48 | GCB | LG-English | cALWEd | ward | euclidean | none | --- | 500 | 2 | --- | 0.0 | 54% | 48% | 0.54 | [5491, 3805, 3098, 1724, 964]
In [13]:
%%capture
kwargs['min_word_count'] = 1
line = [[49, corpus, dataset, 0, 0, 'none']]
a, _, header, log, rules = wide_rows(line, out_dir, cp, rp, runs, **kwargs)
table.extend(a)
In [14]:
display(html_table([header] + a)); print(test_stats(log))
Line | Corpus | Parsing | Space | Linkage | Affinity | G12n | Threshold | Rules | MWC | NN | SI | PA | PQ | F1 | Top 5 cluster sizes
49 | GCB | LG-English | cALWEd | ward | euclidean | none | --- | 500 | 1 | --- | 0.0 | 53% | 48% | 0.54 | [7303, 4580, 2802, 2159, 1486]
Cleaned dictionary: 37087 words, grammar learn time: 09:11:22, test time: 01:18:45 (h:m:s)

Save results

In [15]:
display(html_table([header] + table))
Line | Corpus | Parsing | Space | Linkage | Affinity | G12n | Threshold | Rules | MWC | NN | SI | PA | PQ | F1 | Top 5 cluster sizes
45 | GCB | LG-English | cALWEd | ward | euclidean | none | --- | 500 | 31 | --- | 0.0 | 50% | 44% | 0.51 | [1002, 466, 437, 384, 381]
46 | GCB | LG-English | cALWEd | ward | euclidean | none | --- | 500 | 21 | --- | 0.0 | 51% | 46% | 0.52 | [1028, 625, 620, 584, 551]
47 | GCB | LG-English | cALWEd | ward | euclidean | none | --- | 500 | 11 | --- | 0.0 | 52% | 47% | 0.53 | [1096, 1028, 907, 848, 716]
48 | GCB | LG-English | cALWEd | ward | euclidean | none | --- | 500 | 2 | --- | 0.0 | 54% | 48% | 0.54 | [5491, 3805, 3098, 1724, 964]
49 | GCB | LG-English | cALWEd | ward | euclidean | none | --- | 500 | 1 | --- | 0.0 | 53% | 48% | 0.54 | [7303, 4580, 2802, 2159, 1486]
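The summary shows F1 rising from 0.51 to 0.54 as min_word_count drops from 31 to 2, with no further gain at 1, while grammar learn time grows from roughly 20 minutes to over 9 hours. A minimal sketch to plot this trade-off from the table above, assuming matplotlib is available in the environment:

import matplotlib.pyplot as plt
mwc = [31, 21, 11, 2, 1]                # min_word_count settings
f1  = [0.51, 0.52, 0.53, 0.54, 0.54]    # F1 from the summary table above
plt.plot(mwc, f1, 'o-')
plt.gca().invert_xaxis()                # lower MWC = larger cleaned dictionary
plt.xlabel('min_word_count'); plt.ylabel('F1')
plt.title('GCB, LG-English, 500 clusters')
plt.show()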
In [16]:
print(UTC(), ':: finished, elapsed', str(round((time.time()-start)/3600.0, 1)), 'hours')
table_str = list2file(table, out_dir + '/all_tests_table.txt')
print('Results saved to', out_dir + '/all_tests_table.txt')
2018-12-15 09:14:01 UTC :: finished, elapsed 22.7 hours
Results saved to /home/obaskov/94/language-learning/output/Gutenberg-Children-Books-2018-12-14/all_tests_table.txt