Child-Directed Speech ALE tests, 2018-11-08

Agglomerative clustering; test_grammar updated 2018-10-19; Link Grammar 5.4.4.
This notebook is shared as static Child-Directed-Speech-ALE-2018-11-08_.html.
Data are saved in the Child-Directed-Speech-ALE-2018-11-08_ directory.
Previous (reference) tests:

Basic settings

In [1]:
import os, sys, time
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path: sys.path.append(module_path)
from src.grammar_learner.utl import UTC
from src.grammar_learner.read_files import check_dir
from src.grammar_learner.write_files import list2file
from src.grammar_learner.widgets import html_table
from src.grammar_learner.pqa_table import table_rows
tmpath = module_path + '/tmp/'
check_dir(tmpath, True, 'none')
table = []
long_table = []
start = time.time()
print(UTC(), ':: module_path =', module_path)
2018-11-08 20:25:56 UTC :: module_path = /home/obaskov/94/language-learning

Corpus test settings

In [2]:
corpus = 'CDS-caps-br-text+brent9mos'
corpus = 'CDS-caps-br-text'             # overrides the line above
dataset = 'LG-English'
dataset = 'LG-English-clean-clean'      # overrides the line above; 2018-10-29: only 100% parsed sentences
out_dir = module_path + '/output/Child-Directed-Speech-ALE-' + str(UTC())[:10] + '_'
runs = (1,1)
if runs != (1,1): out_dir += '-multi'
kwargs = {
    'left_wall'     :   ''          ,
    'period'        :   False       ,
    'context'       :   2           ,
    'min_word_count':   1           ,
    'min_link_count':   1           ,
    'max_words'     :   100000      ,
    'max_features'  :   100000      ,
    'min_co-occurrence_count':  1   ,
    'min_co-occurrence_probability': 1e-9,
    'word_space'    :   'sparse'   ,
    'clustering'    :   ('agglomerative', 'ward'),
    'cluster_range' :   (20,200,20,1),
    'cluster_criteria'  : 'silhouette',
    'clustering_metric' : ('silhouette', 'cosine'),
    'cluster_level' :   1           ,
    'grammar_rules' :   2           ,
    'max_disjuncts' :   100000      ,
    'tmpath'        :   tmpath      , 
    'verbose'       :   'min'       ,
    'template_path' :   'poc-turtle',
    'linkage_limit' :   1000        ,
    'categories_generalization': 'off' }
lines = [
    [33, corpus , 'LG-English'                     ,0,0, 'none'  ], 
    [34, corpus , 'LG-English'                     ,0,0, 'rules' ], 
    [35, corpus , 'R=6-Weight=6:R-mst-weight=+1:R' ,0,0, 'none'  ], 
    [36, corpus , 'R=6-Weight=6:R-mst-weight=+1:R' ,0,0, 'rules' ]]
# rp = module_path + '/data/CDS-caps-br-text+brent9mos/LG-English'
rp = module_path + '/data/CDS-caps-br-text/LG-English'  # shorter test 81025
rp = module_path + '/data/CDS-caps-br-text/LG-English-clean-clean'  # overrides the line above
cp = rp  # corpus path = reference_path :: use 'gold' parses as test corpus
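
A quick optional sanity check before the runs below, using only the Python standard library: it assumes rp and cp (set above) point to directories of parse files.

# Optional sanity check: confirm the reference-parses (rp) and test-corpus (cp)
# directories exist and are non-empty before calling table_rows.
for label, path in (('reference parses (rp)', rp), ('test corpus (cp)', cp)):
    assert os.path.isdir(path), label + ' not found: ' + path
    print(label, ':', len(os.listdir(path)), 'files in', path)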

200 clusters

In [3]:
%%capture
kwargs['cluster_range'] = 200
line = [lines[0]]
linez = [lines[0], lines[1]]
out = out_dir + '/200-clusters'
a, _, header = table_rows(lines, out, cp, rp, runs, **kwargs)
display(html_table([header] + a))
In [4]:
display(html_table([header] + a))
Line | Corpus | Parsing | LW | RW | Gen. | Space | Rules | Silhouette | PA | PQ | F1
33 | CDS-caps-br-text | LG-English | --- | --- | none | dALEd | 200 | --- | 99% | 98% | 0.98
34 | CDS-caps-br-text | LG-English | --- | --- | rules | dALEd | 182 | --- | 99% | 97% | 0.97
35 | CDS-caps-br-text | R=6-Weight=6:R-mst-weight=+1:R | --- | --- | none | dALEd | 200 | --- | 71% | 47% | 0.49
36 | CDS-caps-br-text | R=6-Weight=6:R-mst-weight=+1:R | --- | --- | rules | dALEd | 200 | --- | 71% | 47% | 0.49
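
The summary above can also be saved as CSV for later comparison; a minimal sketch, assuming header is the list of column names, a is the list of row lists displayed above, and the out directory already exists after table_rows. The filename results-200-clusters.csv is illustrative.

import csv
# Write the displayed header and rows to a CSV file (illustrative filename),
# assuming `header` is a flat list of column names and `a` a list of row lists.
with open(out + '/results-200-clusters.csv', 'w', newline='') as f:
    csv.writer(f).writerows([header] + a)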

Influence of input-parse cleanup (TL;DR: no influence; see the comparison sketch after the second table below)

Both the input parses (training set) and the test set are clean

"clean-clean" dataset -- all incomplete parses removed

In [5]:
%%capture
corpus = 'CDS-caps-br-text'
dataset = 'LG-English-clean-clean'  # 2018-10-29: only 100% parsed sentences
out = out_dir + '/clean-training-set'
t1 = []
table = []
crange = kwargs['cluster_range']
for kwargs['cluster_range'] in range(20,201,20):
    average, _, header = table_rows(line, out, cp, rp, runs, **kwargs)
    t1.extend(average)
    table.extend(average)
kwargs['cluster_range'] = crange
In [6]:
display(html_table([header] + t1))
Line | Corpus | Parsing | LW | RW | Gen. | Space | Rules | Silhouette | PA | PQ | F1
33 | CDS-caps-br-text | LG-English | --- | --- | none | dALEd | 20 | --- | 99% | 81% | 0.82
33 | CDS-caps-br-text | LG-English | --- | --- | none | dALEd | 40 | --- | 99% | 89% | 0.90
33 | CDS-caps-br-text | LG-English | --- | --- | none | dALEd | 60 | --- | 99% | 92% | 0.93
33 | CDS-caps-br-text | LG-English | --- | --- | none | dALEd | 80 | --- | 99% | 94% | 0.94
33 | CDS-caps-br-text | LG-English | --- | --- | none | dALEd | 100 | --- | 99% | 96% | 0.97
33 | CDS-caps-br-text | LG-English | --- | --- | none | dALEd | 120 | --- | 99% | 97% | 0.97
33 | CDS-caps-br-text | LG-English | --- | --- | none | dALEd | 140 | --- | 99% | 97% | 0.98
33 | CDS-caps-br-text | LG-English | --- | --- | none | dALEd | 160 | --- | 99% | 98% | 0.98
33 | CDS-caps-br-text | LG-English | --- | --- | none | dALEd | 180 | --- | 99% | 98% | 0.98
33 | CDS-caps-br-text | LG-English | --- | --- | none | dALEd | 200 | --- | 99% | 98% | 0.98
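
To visualize the trend, the PQ column can be plotted against the number of clusters; a minimal matplotlib sketch with the percentages transcribed by hand from the table above (not pulled from t1).

import matplotlib.pyplot as plt
# PQ (parse quality) vs. number of clusters, values transcribed from the table above.
clusters = list(range(20, 201, 20))
pq = [81, 89, 92, 94, 96, 97, 97, 98, 98, 98]  # percent
plt.plot(clusters, pq, marker='o')
plt.xlabel('number of clusters')
plt.ylabel('PQ, %')
plt.title('Parse quality vs. cluster count (clean-clean training set)')
plt.show()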

Input parses (training set) as is, without cleanup; test set -- "clean-clean"

In [7]:
%%capture
corpus = 'CDS-caps-br-text'
dataset = 'LG-English'  # initial parses, no cleanup
out = out_dir + '/basic-training-set'
t2 = []
for kwargs['cluster_range'] in range(20,201,20):
    average, _, header = table_rows(line, out, cp, rp, runs, **kwargs)
    t2.extend(average)
    table.extend(average)
In [8]:
display(html_table([header] + t2))
Line | Corpus | Parsing | LW | RW | Gen. | Space | Rules | Silhouette | PA | PQ | F1
33 | CDS-caps-br-text | LG-English | --- | --- | none | dALEd | 20 | --- | 99% | 81% | 0.82
33 | CDS-caps-br-text | LG-English | --- | --- | none | dALEd | 40 | --- | 99% | 89% | 0.90
33 | CDS-caps-br-text | LG-English | --- | --- | none | dALEd | 60 | --- | 99% | 92% | 0.93
33 | CDS-caps-br-text | LG-English | --- | --- | none | dALEd | 80 | --- | 99% | 94% | 0.94
33 | CDS-caps-br-text | LG-English | --- | --- | none | dALEd | 100 | --- | 99% | 96% | 0.97
33 | CDS-caps-br-text | LG-English | --- | --- | none | dALEd | 120 | --- | 99% | 97% | 0.97
33 | CDS-caps-br-text | LG-English | --- | --- | none | dALEd | 140 | --- | 99% | 97% | 0.98
33 | CDS-caps-br-text | LG-English | --- | --- | none | dALEd | 160 | --- | 99% | 98% | 0.98
33 | CDS-caps-br-text | LG-English | --- | --- | none | dALEd | 180 | --- | 99% | 98% | 0.98
33 | CDS-caps-br-text | LG-English | --- | --- | none | dALEd | 200 | --- | 99% | 98% | 0.98
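
To make the "no influence" conclusion explicit, the two result sets can be compared row by row; a minimal sketch, assuming t1 (clean-clean training set) and t2 (uncleaned training set) are the lists of row lists displayed in the two tables above. If table_rows attaches run-specific fields (e.g. timings) to the rows, restrict the comparison to the displayed columns.

# Row-by-row comparison of the clean-clean (t1) and uncleaned (t2) results;
# identical rows support the "TL;DR: no influence" conclusion above.
for r1, r2 in zip(t1, t2):
    if r1 != r2:
        print('difference:', r1, '!=', r2)
print('identical' if t1 == t2 else 'differences found')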