Mean shift clustering "Gutenberg Children Books" 2019-03-28

"Gutenberg Children Books" corpus, new "LG-E-noQuotes" dataset (GC_LGEnglish_noQuotes_fullyParsed.ull),
trash filter off: min_word_count = 31, 21, 11, 6, 2, 1; max_sentence_length off; Link Grammar 5.6.

This notebook is shared as the static Mean-shift-clustering-GCB-LG-E-noQuotes-2019-03-28.html.
Output data is shared in the Mean-shift-clustering-GCB-LG-E-noQuotes-2019-03-28 directory.

Basic settings

In [1]:
import os, sys, time
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path: sys.path.append(module_path)
from src.grammar_learner.utl import UTC, test_stats
from src.grammar_learner.read_files import check_dir, check_corpus
from src.grammar_learner.write_files import list2file
from src.grammar_learner.widgets import html_table
from src.grammar_learner.pqa_table import table_rows, params, wide_rows
tmpath = module_path + '/tmp/'
check_dir(tmpath, True, 'none')
start = time.time()
runs = (1,1)
print(UTC(), ':: module_path:', module_path)
2019-03-28 06:36:08 UTC :: module_path: /home/obaskov/94/ULL

Corpus test settings

In [2]:
corpus = 'GCB' # 'Gutenberg-Children-Books-Caps' 
dataset = 'LG-E-noQuotes'  # 'LG-E-clean'
kwargs = {
    'left_wall'     :   ''          ,
    'period'        :   False       ,
    'context'       :   1           ,
    'min_word_count':   1           ,  # 11, 1
    'word_space'    :   'sparse'    ,
    'clustering'    :   ('mean_shift', 2),
    'clustering_metric' : ['silhouette', 'cosine'],
    'cluster_range' :   [0]         ,   # auto
    'top_level'     :   0.01        ,
    'grammar_rules' :   2           ,
    'max_disjuncts' :   1000000     ,   # off
    'stop_words'    :   []          ,
    'tmpath'        :   tmpath      ,
    'verbose'       :   'log+'      ,
    'template_path' :   'poc-turtle',
    'linkage_limit' :   1000        }
rp = module_path + '/data/GCB/LG-E-noQuotes/'
cp = rp  # corpus path = reference_path
out_dir = module_path + '/output/' + 'Mean-shift-GCB-LG-noQuotes-' + str(UTC())[:10]
print(UTC(), '\n', out_dir)
2019-03-28 06:36:08 UTC 
 /home/obaskov/94/ULL/output/Mean-shift-GCB-LG-noQuotes-2019-03-28
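The `('mean_shift', 2)` setting selects mean shift clustering of the word space; the Grammar Learner's internal call is not shown in this notebook. As a minimal illustration of the technique only (using scikit-learn's `MeanShift` rather than the project's own code, with a bandwidth fixed by hand for the toy data — in the notebook the cluster count is found automatically, cf. `cluster_range = [0]`):

```python
import numpy as np
from sklearn.cluster import MeanShift

# Toy "word vectors": two tight, well-separated groups in a 2-D space.
X = np.array([[1.0, 1.1], [1.2, 0.9], [0.9, 1.0],
              [8.0, 8.2], [8.1, 7.9], [7.9, 8.0]])

# Mean shift climbs the kernel density estimate; bandwidth sets the
# kernel radius and thereby the number of clusters found.
ms = MeanShift(bandwidth=2.0).fit(X)
print(ms.labels_)           # two clusters for this toy data
print(ms.cluster_centers_)  # one mode per cluster
```

Unlike k-means, mean shift does not take a cluster count up front, which is why the notebook can leave `cluster_range` on auto.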

Tests: min_word_count = 31, 21, 11, 6, 2

In [3]:
%%capture
table = []
line = [['', corpus, dataset, 0, 0, 'none']]
kwargs['min_word_count'] = 31
a, _, header, log, rules = wide_rows(line, out_dir, cp, rp, runs, **kwargs)
header[0] = ''
table.extend(a)
In [4]:
display(html_table([header] + a)); print(test_stats(log))
Corpus | Parsing | Space | Linkage | Affinity | G12n | Threshold | Rules | MWC | NN | SI | PA | PQ | F1 | Top 5 cluster sizes
GCB | LG-E-noQuotes | cMLEd | mean_shift | --- | none | --- | 3341 | 31 | --- | 0.0 | 57% | 52% | 0.61 | [1, 0]
(MWC = min_word_count; SI = silhouette index; PA = parse-ability; PQ = parse quality)
Cleaned dictionary: 3341 words, grammar learn time: 00:34:11, grammar test time: 00:18:14
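Cells In [5] through In [16] are omitted from this export; judging by cell In [3], each presumably resets `kwargs['min_word_count']` to the next value and reruns `wide_rows`. The pattern, sketched with a hypothetical stand-in for the project call so it runs on its own:

```python
# run_one is a hypothetical stand-in for the wide_rows(...) call in In [3];
# the real call returns result rows that are appended to the summary table.
def run_one(min_word_count):
    return [['GCB', 'LG-E-noQuotes', min_word_count]]

table = []
for mwc in [31, 21, 11, 6, 2]:
    # real notebook: kwargs['min_word_count'] = mwc, then wide_rows(**kwargs)
    table.extend(run_one(mwc))
print(len(table), 'result rows')
```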
In [6]:
display(html_table([header] + a)); print(test_stats(log))
Corpus | Parsing | Space | Linkage | Affinity | G12n | Threshold | Rules | MWC | NN | SI | PA | PQ | F1 | Top 5 cluster sizes
GCB | LG-E-noQuotes | cMLEd | mean_shift | --- | none | --- | 4417 | 21 | --- | 0.0 | 59% | 54% | 0.63 | [1, 0]
Cleaned dictionary: 4417 words, grammar learn time: 01:03:23, grammar test time: 00:19:23
In [8]:
display(html_table([header] + a)); print(test_stats(log))
Corpus | Parsing | Space | Linkage | Affinity | G12n | Threshold | Rules | MWC | NN | SI | PA | PQ | F1 | Top 5 cluster sizes
GCB | LG-E-noQuotes | cMLEd | mean_shift | --- | none | --- | 6801 | 11 | --- | 0.0 | 62% | 57% | 0.65 | [57, 3, 2, 1, 0]
Cleaned dictionary: 6866 words, grammar learn time: 03:05:49, grammar test time: 00:20:23
In [12]:
display(html_table([header] + a)); print(test_stats(log))
Corpus | Parsing | Space | Linkage | Affinity | G12n | Threshold | Rules | MWC | NN | SI | PA | PQ | F1 | Top 5 cluster sizes
GCB | LG-E-noQuotes | cMLEd | mean_shift | --- | none | --- | 8735 | 6 | --- | 0.0 | 63% | 59% | 0.67 | [1251, 7, 4, 3, 2]
Cleaned dictionary: 10053 words, grammar learn time: 09:18:28, grammar test time: 00:22:33
In [16]:
display(html_table([header] + a)); print(test_stats(log))
Corpus | Parsing | Space | Linkage | Affinity | G12n | Threshold | Rules | MWC | NN | SI | PA | PQ | F1 | Top 5 cluster sizes
GCB | LG-E-noQuotes | cMLEd | mean_shift | --- | none | --- | 8089 | 2 | --- | 0.0 | 64% | 61% | 0.67 | [11194, 4, 3, 2, 1]
Cleaned dictionary: 19326 words, grammar learn time: 71:29:28, grammar test time: 00:21:04
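The SI column stayed at 0.0 across all runs. With `'clustering_metric': ['silhouette', 'cosine']`, the silhouette index is presumably computed over cosine distances; a minimal sketch of such a computation with scikit-learn (an assumption about the metric pairing — the toy vectors are purely illustrative):

```python
import numpy as np
from sklearn.metrics import silhouette_score

# Four toy word vectors forming two tight, well-separated direction clusters.
X = np.array([[1.0, 0.0], [0.9, 0.1],
              [0.0, 1.0], [0.1, 0.9]])
labels = [0, 0, 1, 1]

# Silhouette lies in [-1, 1]: near 1 means compact, well-separated clusters;
# near 0 means heavily overlapping clusters, as the SI = 0.0 results suggest.
si = silhouette_score(X, labels, metric='cosine')
print(round(si, 2))
```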

Save results

In [17]:
display(html_table([header] + table))
Corpus | Parsing | Space | Linkage | Affinity | G12n | Threshold | Rules | MWC | NN | SI | PA | PQ | F1 | Top 5 cluster sizes
GCB | LG-E-noQuotes | cMLEd | mean_shift | --- | none | --- | 3341 | 31 | --- | 0.0 | 57% | 52% | 0.61 | [1, 0]
GCB | LG-E-noQuotes | cMLEd | mean_shift | --- | none | --- | 4417 | 21 | --- | 0.0 | 59% | 54% | 0.63 | [1, 0]
GCB | LG-E-noQuotes | cMLEd | mean_shift | --- | none | --- | 6801 | 11 | --- | 0.0 | 62% | 57% | 0.65 | [57, 3, 2, 1, 0]
GCB | LG-E-noQuotes | cMLEd | mean_shift | --- | none | --- | 8735 | 6 | --- | 0.0 | 63% | 59% | 0.67 | [1251, 7, 4, 3, 2]
GCB | LG-E-noQuotes | cMLEd | mean_shift | --- | none | --- | 8089 | 2 | --- | 0.0 | 64% | 61% | 0.67 | [11194, 4, 3, 2, 1]
In [18]:
print(UTC(), ':: 5 tests finished, elapsed', str(round((time.time()-start)/3600.0, 1)), 'hours')
table_str = list2file(table, out_dir + '/all_tests_table.txt')
print('Results saved to', out_dir + '/all_tests_table.txt')
2019-03-31 21:49:11 UTC :: 5 tests finished, elapsed 87.2 hours
Results saved to /home/obaskov/94/ULL/output/Mean-shift-GCB-LG-noQuotes-2019-03-28/all_tests_table.txt
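The exact output format of `list2file` is project-specific; as a rough stand-in, assuming one table row per line with tab-separated cells (an assumption about the format, with a hypothetical helper name), the save step amounts to:

```python
import os, tempfile

def rows_to_file(rows, path):
    # Hypothetical stand-in for list2file: one row per line, cells
    # joined by tabs (assumed format, not the project's actual code).
    with open(path, 'w') as f:
        f.write('\n'.join('\t'.join(str(c) for c in row) for row in rows))
    return path

path = rows_to_file([['Corpus', 'MWC', 'F1'], ['GCB', 31, 0.61]],
                    os.path.join(tempfile.gettempdir(), 'all_tests_table.txt'))
print(open(path).read())
```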

Test with min_word_count = 1: 110 hours, F1 ~0.66

In [ ]:
%%capture
kwargs['min_word_count'] = 1
a, _, h, log, rules = wide_rows(line, out_dir, cp, rp, runs, **kwargs)
table.extend(a)
In [ ]:
display(html_table([header] + a)); print(test_stats(log))

Test finished 2019-04-05 10:50 UTC:
Clean corpus size 22641, Grammar learn time 110:27:19, Grammar test time 00:27:12.
PA = 63%, PQ = 60%, F1 = 0.66, Top 5 cluster sizes: [14527, 4, 3, 2, 1]