POC-Turtle tests 2018-08-05

Updated optimal clustering search (2018-08-05), run on server 88.99.210.144.
Each line is calculated once, and parsing metrics are tested once for each calculation.
The calculation table is shared as 'short_table.txt' in the data folder:
http://langlearn.singularitynet.io/data/clustering_2018/POC-Turtle-2018-08-05/
This notebook is shared as static HTML via
http://langlearn.singularitynet.io/data/clustering_2018/html/POC-Turtle-2018-08-05.html
The results consistency test is shared via
http://langlearn.singularitynet.io/data/clustering_2018/html/POC-Turtle-2018-08-05-multi.html

Basic settings

In [1]:
import os, sys, time
from IPython.display import display
import matplotlib.pyplot as plt
%matplotlib inline
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path: 
    sys.path.append(module_path)
grammar_learner_path = module_path + '/src/grammar_learner/'
if os.path.exists(grammar_learner_path) and grammar_learner_path not in sys.path: 
    sys.path.append(grammar_learner_path)
from utl import UTC
from read_files import check_dir
from widgets import html_table
from pqa_table import table_cds
tmpath = module_path + '/tmp/'
if check_dir(tmpath, True, 'none'):
    table = []
    long_table = []
    header = ['Line','Corpus','Parsing','LW','"."','Generalization','Space','Rules','PA','PQ']
    start = time.time()
    print(UTC(), ':: module_path =', module_path)
else: print(UTC(), ':: could not create temporary files directory', tmpath)
2018-08-05 07:54:14 UTC :: module_path = /home/obaskov/language-learning

Grammar Learner corpus-specific parameters

In [2]:
corpus = 'POC-Turtle'
out_dir = module_path + '/output/'+ corpus + '-' + str(UTC())[:10]
runs = (1,1)    # (attempts to learn grammar per line, grammar tests per attempt)
if runs != (1,1): out_dir += '-multi'
kwargs = {
    'left_wall'     :   ''          ,
    'period'        :   False       ,
    'cluster_range' :   (2,50,9)    ,   # see comments below
    'clustering'    :   ('kmeans', 'kmeans++', 18),
    'tmpath'        :   tmpath      , 
    'verbose'       :   'min'       ,
    'template_path' :   'poc-turtle',
    'linkage_limit' :   1000         ,
    'categories_generalization': 'off' }
lines = [
    [4, 'POC-Turtle'    , 'MST-fixed-manually'              ,'LW','.', 'none'  ],
    [5, 'POC-Turtle'    , 'MST-fixed-manually'              , 0  , 0 , 'none'  ],
    [6, 'POC-Turtle'    , 'R=6-Weight=6:R-mst-weight=+1:R'  ,'LW','.', 'none'  ],
    [7, 'POC-Turtle'    , 'R=6-Weight=6:R-mst-weight=+1:R'  , 0  , 0 , 'none'  ],
    [8, 'POC-Turtle'    , 'R=6-Weight=1-no-mst-weighting'   ,'LW','.', 'none'  ], 
    [9, 'POC-Turtle'    , 'R=6-Weight=1-no-mst-weighting'   , 0  , 0 , 'none'  ],
    [10, 'POC-Turtle'   , 'LG-ANY-all-parses'               ,'LW','.', 'none'  ],
    [11, 'POC-Turtle'   , 'LG-ANY-all-parses'               , 0  , 0 , 'none'  ]]
# cp,rp :: (test) corpus_path and reference_path
cp = module_path + '/data/POC-Turtle/poc-turtle-corpus.txt'
rp = module_path + '/data/POC-Turtle/MST-fixed-manually/poc-turtle-parses-gold.txt'
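
The 'LW' and '.' fields of each row in lines control whether a test keeps the left wall and the sentence-final period. A hypothetical sketch of how one row could be overlaid onto the shared kwargs for a single run (illustration only: table_cds handles this internally in the project, and both the helper name and the 'LEFT-WALL' value are assumptions):

# Hypothetical helper -- not the project's table_cds; for illustration only.
def line_kwargs(line, base_kwargs):
    """Overlay one test line's left-wall / period / generalization settings
    onto the shared kwargs before a single learning run."""
    num, corpus_name, parses, lw, period, gen = line
    kw = dict(base_kwargs)
    kw['left_wall'] = 'LEFT-WALL' if lw == 'LW' else ''   # assumed token name
    kw['period'] = (period == '.')
    kw['categories_generalization'] = gen
    return kw

line_kwargs(lines[0], kwargs)['left_wall']   # -> 'LEFT-WALL' for line 4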

Comments on the new 2018-08-04 optimal clustering search algorithm:
'cluster_range': (2,50,9) :: min, max, proof.
proof: the number of top silhouette-index results required for a given number of clusters to prove it the best choice.
'clustering': ('kmeans','kmeans++',18) :: algorithm, initialization, number of initializations (n_init).
proof = 9 and n_init = 18 may look awkward, but these values were necessary to make line 10 ('POC-Turtle', 'LG-ANY-all-parses', 'LW', '.', 'none') reproducible with the connectors-DRK-connectors settings.
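
A minimal sketch of such a silhouette-based search, assuming scikit-learn; this is not the Grammar Learner's implementation, and 'proof' is read here as the number of top-ranked cluster counts re-checked with fresh initializations before accepting the winner (an assumption):

# Sketch only: searches cluster_range = (k_min, k_max, proof) by silhouette index.
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def best_k(vectors, k_min=2, k_max=50, proof=9, n_init=18, seed=None):
    k_max = min(k_max, len(vectors) - 1)      # silhouette needs 2 <= k < n_samples
    scores = {}
    for k in range(k_min, k_max + 1):
        labels = KMeans(n_clusters=k, init='k-means++',
                        n_init=n_init, random_state=seed).fit_predict(vectors)
        scores[k] = silhouette_score(vectors, labels)
    top = sorted(scores, key=scores.get, reverse=True)[:proof]
    confirmed = {}
    for k in top:                             # re-cluster the leading candidates
        labels = KMeans(n_clusters=k, init='k-means++',
                        n_init=n_init).fit_predict(vectors)
        confirmed[k] = silhouette_score(vectors, labels)
    return max(confirmed, key=confirmed.get)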

ULL Project Plan ⇒ Parses ⇒ lines 5-11

Connectors-DRK-Connectors

In [3]:
%%capture
kwargs['context'] = 1
kwargs['word_space'] = 'vectors'
kwargs['clustering'] = ('kmeans','kmeans++',18)
kwargs['cluster_range'] = (2,50,9)
kwargs['grammar_rules'] = 1
average21, long21 = table_cds(lines, out_dir, cp, rp, runs, **kwargs)
table.extend(average21)
long_table.extend(long21)
In [4]:
display(html_table([header]+average21))
Line | Corpus     | Parsing                        | LW | "." | Generalization | Space | Rules | PA   | PQ
4    | POC-Turtle | MST-fixed-manually             | LW | +   | none           | cDRKc | 6     | 100% | 100%
5    | POC-Turtle | MST-fixed-manually             | -- | --  | none           | cDRKc | 4     | 100% | 100%
6    | POC-Turtle | R=6-Weight=6:R-mst-weight=+1:R | LW | +   | none           | cDRKc | 6     | 100% | 100%
7    | POC-Turtle | R=6-Weight=6:R-mst-weight=+1:R | -- | --  | none           | cDRKc | 4     | 100% | 100%
8    | POC-Turtle | R=6-Weight=1-no-mst-weighting  | LW | +   | none           | cDRKc | 7     | 67%  | 0%
9    | POC-Turtle | R=6-Weight=1-no-mst-weighting  | -- | --  | none           | cDRKc | 7     | 67%  | 0%
10   | POC-Turtle | LG-ANY-all-parses              | LW | +   | none           | cDRKc | 8     | 100% | 100%
11   | POC-Turtle | LG-ANY-all-parses              | -- | --  | none           | cDRKc | 7     | 97%  | 92%
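
The 'Space' column appears to encode the three kwargs varied between sections: 'c'/'d' for connector or disjunct context and rules, and 'DRK'/'ILE' for the word-space method. A hypothetical one-liner reproducing the label from the kwargs above (an assumption inferred from the table values, not project code):

# Assumed mapping from kwargs to the 'Space' label (illustration only):
space = {1: 'c', 2: 'd'}[kwargs['context']] \
      + ('DRK' if kwargs['word_space'] == 'vectors' else 'ILE') \
      + {1: 'c', 2: 'd'}[kwargs['grammar_rules']]
print(space)   # 'cDRKc' for context=1, word_space='vectors', grammar_rules=1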

Connectors-DRK-Disjuncts

In [5]:
%%capture
kwargs['context'] = 1
kwargs['word_space'] = 'vectors'
kwargs['clustering'] = ('kmeans','kmeans++',18)
kwargs['cluster_range'] = (2,50,9)
kwargs['grammar_rules'] = 2
average22, long22 = table_cds(lines, out_dir, cp, rp, runs, **kwargs)
table.extend(average22)
long_table.extend(long22)
In [6]:
display(html_table([header]+average22))
Line | Corpus     | Parsing                        | LW | "." | Generalization | Space | Rules | PA   | PQ
4    | POC-Turtle | MST-fixed-manually             | LW | +   | none           | cDRKd | 6     | 100% | 100%
5    | POC-Turtle | MST-fixed-manually             | -- | --  | none           | cDRKd | 4     | 100% | 100%
6    | POC-Turtle | R=6-Weight=6:R-mst-weight=+1:R | LW | +   | none           | cDRKd | 6     | 83%  | 83%
7    | POC-Turtle | R=6-Weight=6:R-mst-weight=+1:R | -- | --  | none           | cDRKd | 4     | 83%  | 83%
8    | POC-Turtle | R=6-Weight=1-no-mst-weighting  | LW | +   | none           | cDRKd | 7     | 44%  | 0%
9    | POC-Turtle | R=6-Weight=1-no-mst-weighting  | -- | --  | none           | cDRKd | 7     | 0%   | 0%
10   | POC-Turtle | LG-ANY-all-parses              | LW | +   | none           | cDRKd | 8     | 92%  | 92%
11   | POC-Turtle | LG-ANY-all-parses              | -- | --  | none           | cDRKd | 7     | 92%  | 92%

Disjuncts-DRK-Disjuncts

In [7]:
%%capture
kwargs['context'] = 2
kwargs['word_space'] = 'vectors'
kwargs['clustering'] = ('kmeans','kmeans++',18)
kwargs['cluster_range'] = (2,50,9)
kwargs['grammar_rules'] = 2
average23, long23 = table_cds(lines, out_dir, cp, rp, runs, **kwargs)
table.extend(average23)
long_table.extend(long23)
In [8]:
display(html_table([header]+average23))
Line | Corpus     | Parsing                        | LW | "." | Generalization | Space | Rules | PA   | PQ
4    | POC-Turtle | MST-fixed-manually             | LW | +   | none           | dDRKd | 8     | 100% | 100%
5    | POC-Turtle | MST-fixed-manually             | -- | --  | none           | dDRKd | 4     | 100% | 100%
6    | POC-Turtle | R=6-Weight=6:R-mst-weight=+1:R | LW | +   | none           | dDRKd | 0     | 0%   | 0%
7    | POC-Turtle | R=6-Weight=6:R-mst-weight=+1:R | -- | --  | none           | dDRKd | 7     | 83%  | 83%
8    | POC-Turtle | R=6-Weight=1-no-mst-weighting  | LW | +   | none           | dDRKd | 0     | 0%   | 0%
9    | POC-Turtle | R=6-Weight=1-no-mst-weighting  | -- | --  | none           | dDRKd | 0     | 0%   | 0%
10   | POC-Turtle | LG-ANY-all-parses              | LW | +   | none           | dDRKd | 6     | 97%  | 96%
11   | POC-Turtle | LG-ANY-all-parses              | -- | --  | none           | dDRKd | 7     | 92%  | 92%

Disjuncts-ILE-Disjuncts

In [9]:
%%capture
kwargs['context'] = 2
kwargs['word_space'] = 'discrete'
kwargs['clustering'] = 'group'
kwargs['grammar_rules'] = 2
average24, long24 = table_cds(lines, out_dir, cp, rp, runs, **kwargs)
table.extend(average24)
long_table.extend(long24)
In [10]:
display(html_table([header]+average24))
Line | Corpus     | Parsing                        | LW | "." | Generalization | Space | Rules | PA   | PQ
4    | POC-Turtle | MST-fixed-manually             | LW | +   | none           | dILEd | 8     | 100% | 100%
5    | POC-Turtle | MST-fixed-manually             | -- | --  | none           | dILEd | 6     | 100% | 100%
6    | POC-Turtle | R=6-Weight=6:R-mst-weight=+1:R | LW | +   | none           | dILEd | 10    | 83%  | 83%
7    | POC-Turtle | R=6-Weight=6:R-mst-weight=+1:R | -- | --  | none           | dILEd | 8     | 83%  | 83%
8    | POC-Turtle | R=6-Weight=1-no-mst-weighting  | LW | +   | none           | dILEd | 13    | 0%   | 0%
9    | POC-Turtle | R=6-Weight=1-no-mst-weighting  | -- | --  | none           | dILEd | 11    | 0%   | 0%
10   | POC-Turtle | LG-ANY-all-parses              | LW | +   | none           | dILEd | 10    | 92%  | 92%
11   | POC-Turtle | LG-ANY-all-parses              | -- | --  | none           | dILEd | 8     | 92%  | 92%

All tests (all individual entries; relevant for multi-test runs with runs > (1,1))

In [11]:
display(html_table([header]+long_table))
Line | Corpus     | Parsing                        | LW | "." | Generalization | Space | Rules | PA   | PQ
4    | POC-Turtle | MST-fixed-manually             | LW | +   | none           | cDRKc | 6     | 100% | 100%
5    | POC-Turtle | MST-fixed-manually             | -- | --  | none           | cDRKc | 4     | 100% | 100%
6    | POC-Turtle | R=6-Weight=6:R-mst-weight=+1:R | LW | +   | none           | cDRKc | 6     | 100% | 100%
7    | POC-Turtle | R=6-Weight=6:R-mst-weight=+1:R | -- | --  | none           | cDRKc | 4     | 100% | 100%
8    | POC-Turtle | R=6-Weight=1-no-mst-weighting  | LW | +   | none           | cDRKc | 7     | 67%  | 0%
9    | POC-Turtle | R=6-Weight=1-no-mst-weighting  | -- | --  | none           | cDRKc | 7     | 67%  | 0%
10   | POC-Turtle | LG-ANY-all-parses              | LW | +   | none           | cDRKc | 8     | 100% | 100%
11   | POC-Turtle | LG-ANY-all-parses              | -- | --  | none           | cDRKc | 7     | 97%  | 92%
4    | POC-Turtle | MST-fixed-manually             | LW | +   | none           | cDRKd | 6     | 100% | 100%
5    | POC-Turtle | MST-fixed-manually             | -- | --  | none           | cDRKd | 4     | 100% | 100%
6    | POC-Turtle | R=6-Weight=6:R-mst-weight=+1:R | LW | +   | none           | cDRKd | 6     | 83%  | 83%
7    | POC-Turtle | R=6-Weight=6:R-mst-weight=+1:R | -- | --  | none           | cDRKd | 4     | 83%  | 83%
8    | POC-Turtle | R=6-Weight=1-no-mst-weighting  | LW | +   | none           | cDRKd | 7     | 44%  | 0%
9    | POC-Turtle | R=6-Weight=1-no-mst-weighting  | -- | --  | none           | cDRKd | 7     | 0%   | 0%
10   | POC-Turtle | LG-ANY-all-parses              | LW | +   | none           | cDRKd | 8     | 92%  | 92%
11   | POC-Turtle | LG-ANY-all-parses              | -- | --  | none           | cDRKd | 7     | 92%  | 92%
4    | POC-Turtle | MST-fixed-manually             | LW | +   | none           | dDRKd | 8     | 100% | 100%
5    | POC-Turtle | MST-fixed-manually             | -- | --  | none           | dDRKd | 4     | 100% | 100%
6    | POC-Turtle | R=6-Weight=6:R-mst-weight=+1:R | LW | +   | none           | dDRKd | fail  | --   | --
7    | POC-Turtle | R=6-Weight=6:R-mst-weight=+1:R | -- | --  | none           | dDRKd | 7     | 83%  | 83%
8    | POC-Turtle | R=6-Weight=1-no-mst-weighting  | LW | +   | none           | dDRKd | fail  | --   | --
9    | POC-Turtle | R=6-Weight=1-no-mst-weighting  | -- | --  | none           | dDRKd | fail  | --   | --
10   | POC-Turtle | LG-ANY-all-parses              | LW | +   | none           | dDRKd | 6     | 97%  | 96%
11   | POC-Turtle | LG-ANY-all-parses              | -- | --  | none           | dDRKd | 7     | 92%  | 92%
4    | POC-Turtle | MST-fixed-manually             | LW | +   | none           | dILEd | 8     | 100% | 100%
5    | POC-Turtle | MST-fixed-manually             | -- | --  | none           | dILEd | 6     | 100% | 100%
6    | POC-Turtle | R=6-Weight=6:R-mst-weight=+1:R | LW | +   | none           | dILEd | 10    | 83%  | 83%
7    | POC-Turtle | R=6-Weight=6:R-mst-weight=+1:R | -- | --  | none           | dILEd | 8     | 83%  | 83%
8    | POC-Turtle | R=6-Weight=1-no-mst-weighting  | LW | +   | none           | dILEd | 13    | 0%   | 0%
9    | POC-Turtle | R=6-Weight=1-no-mst-weighting  | -- | --  | none           | dILEd | 11    | 0%   | 0%
10   | POC-Turtle | LG-ANY-all-parses              | LW | +   | none           | dILEd | 10    | 92%  | 92%
11   | POC-Turtle | LG-ANY-all-parses              | -- | --  | none           | dILEd | 8     | 92%  | 92%
In [12]:
from write_files import list2file
print(UTC(), ':: finished, elapsed', str(round((time.time()-start)/60, 1)), 'min')
table_str = list2file(table, out_dir+'/short_table.txt')
if runs == (1,1):
    print('Results saved to', out_dir + '/short_table.txt')
else:
    long_table_str = list2file(long_table, out_dir+'/long_table.txt')
    print('Average results saved to', out_dir + '/short_table.txt\n'
          'Detailed results for every run saved to', out_dir + '/long_table.txt')
2018-08-05 07:54:42 UTC :: finished, elapsed 0.5 min
Results saved to /home/obaskov/language-learning/output/POC-Turtle-2018-08-05/short_table.txt
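
If the saved table needs to be re-rendered later, and assuming list2file writes one tab-separated row per line (an assumption about the helper's output format, not verified), it can be reloaded like this:

# Assumes tab-separated rows in short_table.txt (assumption about list2file output).
with open(out_dir + '/short_table.txt') as f:
    rows = [line.rstrip('\n').split('\t') for line in f if line.strip()]
display(html_table([header] + rows))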