Child Directed Speech test 2018-08-14

Updated (2018-08-14) Grammar Tester, server 94.130.238.118
Each table line is calculated once, and parsing metrics are tested once per calculation.
The calculation table is shared as 'short_table.txt' in the data folder:
http://langlearn.singularitynet.io/data/clustering_2018/Child-Directed-Speech-2018-08-14/
This notebook is shared as static html via
http://langlearn.singularitynet.io/data/clustering_2018/html/Child-Directed-Speech-2018-08-14.html
The constituency test (multi-run) version of this notebook is shared via
http://langlearn.singularitynet.io/data/clustering_2018/html/Child-Directed-Speech-2018-08-14.html

Basic settings

In [1]:
import os, sys, time
from IPython.display import display
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path: sys.path.append(module_path)
grammar_learner_path = module_path + '/src/grammar_learner/'
if grammar_learner_path not in sys.path: sys.path.append(grammar_learner_path)
from utl import UTC
from read_files import check_dir
from widgets import html_table
from pqa_table import table_cds
tmpath = module_path + '/tmp/'
if check_dir(tmpath, True, 'none'):
    table = []
    long_table = []
    header = ['Line','Corpus','Parsing','LW','"."','Generalization','Space','Rules','PA','PQ']
    start = time.time()
    print(UTC(), ':: module_path =', module_path)
else: print(UTC(), ':: could not create temporary files directory', tmpath)
2018-08-14 12:56:59 UTC :: module_path = /home/obaskov/language-learning

Corpus test settings

In [2]:
out_dir = module_path + '/output/Child-Directed-Speech-' + str(UTC())[:10]
runs = (1,1)    # (attempts to learn grammar per line, grammar tests per attempt)
if runs != (1,1): out_dir += '-multi'
kwargs = {
    'left_wall'     :   ''          ,
    'period'        :   False       ,
    'clustering'    :   ('kmeans', 'kmeans++', 10),
    'cluster_range' :   (120,30,3)  , # max, min, repeat
    'cluster_criteria': 'silhouette',
    'cluster_level' :   1           ,
    'tmpath'        :   tmpath      , 
    'verbose'       :   'min'       ,
    'template_path' :   'poc-turtle',
    'linkage_limit' :   1000        ,
    'categories_generalization': 'off' }
lines = [
    # columns: Line, Corpus, Parsing, LW, '.', Generalization (cf. header above)
    [58, 'CDS-caps-br-text+brent9mos' , 'LG-English'                     ,0,0, 'none'  ], 
    [59, 'CDS-caps-br-text+brent9mos' , 'LG-English'                     ,0,0, 'rules' ], 
    [60, 'CDS-caps-br-text+brent9mos' , 'R=6-Weight=6:R-mst-weight=+1:R' ,0,0, 'none'  ], 
    [61, 'CDS-caps-br-text+brent9mos' , 'R=6-Weight=6:R-mst-weight=+1:R' ,0,0, 'rules' ]]
rp = module_path + '/data/CDS-caps-br-text+brent9mos/LG-English'
cp = rp  # corpus path = reference_path :: use 'gold' parses as test corpus
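
For reference, the four test sections below differ only in four kwargs settings (context, word_space, clustering, grammar_rules). The sketch below collects them in one place; the dict name and the loop are illustrative only, the parameter values are copied from cells In [3], In [5], In [7] and In [9]:

pipelines = {   # illustrative summary of the kwargs overrides used in the sections below
    'Connectors-DRK-Connectors': dict(context=1, word_space='vectors',  clustering='kmeans', grammar_rules=1),
    'Connectors-DRK-Disjuncts' : dict(context=1, word_space='vectors',  clustering='kmeans', grammar_rules=2),
    'Disjuncts-DRK-Disjuncts'  : dict(context=2, word_space='vectors',  clustering='kmeans', grammar_rules=2),
    'Disjuncts-ILE-Disjuncts'  : dict(context=2, word_space='discrete', clustering='group',  grammar_rules=2)}
# The four sections below do essentially this, one configuration per section:
# for name, overrides in pipelines.items():
#     kwargs.update(overrides)
#     average, long_rows = table_cds(lines, out_dir, cp, rp, runs, **kwargs)
#     table.extend(average); long_table.extend(long_rows)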

ULL Project Plan ⇒ Parses ⇒ lines 58-61, by columns

Connectors-DRK-Connectors

In [3]:
%%capture
kwargs['context'] = 1
kwargs['word_space'] = 'vectors'
kwargs['clustering'] = 'kmeans'
kwargs['grammar_rules'] = 1
average21, long21 = table_cds(lines, out_dir, cp, rp, runs, **kwargs)
table.extend(average21)
long_table.extend(long21)
In [4]:
display(html_table([header]+average21))
print(UTC())
Line | Corpus                     | Parsing                        | LW | "." | Generalization | Space | Rules | PA  | PQ
58   | CDS-caps-br-text+brent9mos | LG-English                     | -- | --  | none           | cDRKc | 99    | 69% | 50%
60   | CDS-caps-br-text+brent9mos | R=6-Weight=6:R-mst-weight=+1:R | -- | --  | none           | cDRKc | 87    | 70% | 44%
2018-08-14 14:19:22 UTC

Connectors-DRK-Disjuncts

In [5]:
%%capture
kwargs['context'] = 1
kwargs['word_space'] = 'vectors'
kwargs['clustering'] = 'kmeans'
kwargs['grammar_rules'] = 2
average22, long22 = table_cds(lines, out_dir, cp, rp, runs, **kwargs)
table.extend(average22)
long_table.extend(long22)
In [6]:
display(html_table([header]+average22))
print(UTC())
Line | Corpus                     | Parsing                        | LW | "." | Generalization | Space | Rules | PA  | PQ
58   | CDS-caps-br-text+brent9mos | LG-English                     | -- | --  | none           | cDRKd | 99    | 61% | 47%
59   | CDS-caps-br-text+brent9mos | LG-English                     | -- | --  | rules          | cDRKd | 84    | 63% | 47%
60   | CDS-caps-br-text+brent9mos | R=6-Weight=6:R-mst-weight=+1:R | -- | --  | none           | cDRKd | 65    | 68% | 42%
61   | CDS-caps-br-text+brent9mos | R=6-Weight=6:R-mst-weight=+1:R | -- | --  | rules          | cDRKd | 65    | 66% | 40%
2018-08-14 18:33:29 UTC

Disjuncts-DRK-Disjuncts

In [7]:
%%capture
kwargs['context'] = 2
kwargs['word_space'] = 'vectors'
kwargs['clustering'] = 'kmeans'
kwargs['grammar_rules'] = 2
average23, long23 = table_cds(lines, out_dir, cp, rp, runs, **kwargs)
table.extend(average23)
long_table.extend(long23)
In [8]:
display(html_table([header]+average23))
print(UTC())
Line | Corpus                     | Parsing                        | LW | "." | Generalization | Space | Rules | PA  | PQ
58   | CDS-caps-br-text+brent9mos | LG-English                     | -- | --  | none           | dDRKd | 100   | 58% | 45%
59   | CDS-caps-br-text+brent9mos | LG-English                     | -- | --  | rules          | dDRKd | 66    | 63% | 47%
60   | CDS-caps-br-text+brent9mos | R=6-Weight=6:R-mst-weight=+1:R | -- | --  | none           | dDRKd | 100   | 62% | 37%
61   | CDS-caps-br-text+brent9mos | R=6-Weight=6:R-mst-weight=+1:R | -- | --  | rules          | dDRKd | 100   | 62% | 37%
2018-08-14 19:12:01 UTC

Disjuncts-ILE-Disjuncts

In [9]:
%%capture
kwargs['context'] = 2
kwargs['word_space'] = 'discrete'
kwargs['clustering'] = 'group'
kwargs['grammar_rules'] = 2
average24, long24 = table_cds(lines, out_dir, cp, rp, runs, **kwargs)
table.extend(average24)
long_table.extend(long24)
In [10]:
display(html_table([header]+average24))
print(UTC())
Line | Corpus                     | Parsing                        | LW | "." | Generalization | Space | Rules | PA  | PQ
58   | CDS-caps-br-text+brent9mos | LG-English                     | -- | --  | none           | dILEd | 2980  | 40% | 37%
59   | CDS-caps-br-text+brent9mos | LG-English                     | -- | --  | rules          | dILEd | 2424  | 41% | 37%
60   | CDS-caps-br-text+brent9mos | R=6-Weight=6:R-mst-weight=+1:R | -- | --  | none           | dILEd | 3558  | 0%  | 0%
61   | CDS-caps-br-text+brent9mos | R=6-Weight=6:R-mst-weight=+1:R | -- | --  | rules          | dILEd | 3415  | 48% | 31%
2018-08-14 19:22:13 UTC

All tests

In [11]:
display(html_table([header]+long_table))
Line | Corpus                     | Parsing                        | LW | "." | Generalization | Space | Rules | PA  | PQ
58   | CDS-caps-br-text+brent9mos | LG-English                     | -- | --  | none           | cDRKc | 99    | 69% | 50%
60   | CDS-caps-br-text+brent9mos | R=6-Weight=6:R-mst-weight=+1:R | -- | --  | none           | cDRKc | 87    | 70% | 44%
58   | CDS-caps-br-text+brent9mos | LG-English                     | -- | --  | none           | cDRKd | 99    | 61% | 47%
59   | CDS-caps-br-text+brent9mos | LG-English                     | -- | --  | rules          | cDRKd | 84    | 63% | 47%
60   | CDS-caps-br-text+brent9mos | R=6-Weight=6:R-mst-weight=+1:R | -- | --  | none           | cDRKd | 65    | 68% | 42%
61   | CDS-caps-br-text+brent9mos | R=6-Weight=6:R-mst-weight=+1:R | -- | --  | rules          | cDRKd | 65    | 66% | 40%
58   | CDS-caps-br-text+brent9mos | LG-English                     | -- | --  | none           | dDRKd | 100   | 58% | 45%
59   | CDS-caps-br-text+brent9mos | LG-English                     | -- | --  | rules          | dDRKd | 66    | 63% | 47%
60   | CDS-caps-br-text+brent9mos | R=6-Weight=6:R-mst-weight=+1:R | -- | --  | none           | dDRKd | 100   | 62% | 37%
61   | CDS-caps-br-text+brent9mos | R=6-Weight=6:R-mst-weight=+1:R | -- | --  | rules          | dDRKd | 100   | 62% | 37%
58   | CDS-caps-br-text+brent9mos | LG-English                     | -- | --  | none           | dILEd | 2980  | 40% | 37%
59   | CDS-caps-br-text+brent9mos | LG-English                     | -- | --  | rules          | dILEd | 2424  | 41% | 37%
60   | CDS-caps-br-text+brent9mos | R=6-Weight=6:R-mst-weight=+1:R | -- | --  | none           | dILEd | 3558  | 0%  | 0%
61   | CDS-caps-br-text+brent9mos | R=6-Weight=6:R-mst-weight=+1:R | -- | --  | rules          | dILEd | 3415  | 48% | 31%
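
A quick programmatic summary of long_table can be produced along the following lines; the column positions and the '%'-suffixed PA/PQ strings are assumptions based on the rows displayed above, not guaranteed by the pqa_table API:

from collections import defaultdict
# Sketch only: assumes long_table rows follow the displayed column order
# [Line, Corpus, Parsing, LW, '.', Generalization, Space, Rules, PA, PQ]
# with PA/PQ given as percentage strings such as '69%'.
def pct(x): return float(str(x).strip().rstrip('%') or 0)
groups = defaultdict(list)
for row in long_table:
    groups[row[6]].append((pct(row[8]), pct(row[9])))
for space, scores in sorted(groups.items()):
    pa = sum(p for p, _ in scores) / len(scores)
    pq = sum(q for _, q in scores) / len(scores)
    print(space, ': mean PA', round(pa), '%, mean PQ', round(pq), '%')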
In [12]:
from write_files import list2file
print(UTC(), ':: finished, elapsed', str(round((time.time()-start)/3600.0, 1)), 'hours')
table_str = list2file(table, out_dir+'/short_table.txt')
if runs == (1,1):
    print('Results saved to', out_dir + '/short_table.txt')
else:
    long_table_str = list2file(long_table, out_dir+'/long_table.txt')
    print('Average results saved to', out_dir + '/short_table.txt\n'
          'Detailed results for every run saved to', out_dir + '/long_table.txt')
2018-08-14 19:22:13 UTC :: finished, elapsed 6.4 hours
Results saved to /home/obaskov/language-learning/output/Child-Directed-Speech-2018-08-14/short_table.txt
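
The saved table can be re-read outside this session; a minimal sketch, assuming list2file writes one row per line with tab-separated fields (the delimiter is an assumption, not shown in this notebook):

# Re-read the saved results and render them with the same header as above.
with open(out_dir + '/short_table.txt') as f:
    saved_rows = [line.rstrip('\n').split('\t') for line in f if line.strip()]
display(html_table([header] + saved_rows))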