Grammar rules generalization 2018-11-23_

Link Grammar 5.4.4, test_grammar updated 2018-10-19.
This notebook is shared as static Grammar-Rules-Generalization-2018-11-23_.html
The test results table is saved as table.txt in the clustering_2018/Grammar-Rules-Generalization-2018-11-23_ folder; output data are stored in the relevant subfolders of that folder.

Basic settings

In [1]:
import os, sys, time
from collections import OrderedDict, Counter
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path: sys.path.append(module_path)
from src.grammar_learner.utl import UTC
from src.grammar_learner.read_files import check_dir
from src.grammar_learner.write_files import list2file
from src.grammar_learner.widgets import html_table
from src.grammar_learner.pqa_table import table_rows, wide_rows
tmpath = module_path + '/tmp/'
check_dir(tmpath, True, 'none')
table = []
start = time.time()
print(UTC(), ':: module_path =', module_path)
2018-11-23 17:56:44 UTC :: module_path = /home/obaskov/94/language-learning

Corpus test settings

In [2]:
out_dir = module_path + '/output/Grammar-Rules-Generalization-' + str(UTC())[:10] + '_'
corpus = 'CDS-br-text'  # 'CDS-caps-br-text' shortened -- 2018-11-22
dataset = 'LG-E-clean'  # 2018-10-29: only 100% parsed, shorter names: 
# dataset = 'LG-English'
# dataset = 'MST_6:R+1:R' # shorter name in 'CDS-br-text'
lines = [[0, corpus, dataset, 0, 0, 'none'], 
         [1, corpus, dataset, 0, 0, 'rules'],
         [2, corpus, dataset, 0, 0, 'updated'],
         [3, corpus, dataset, 0, 0, 'new']] 
rp = module_path + '/data/CDS-br-text/LG-E-clean'  # "clean-clean" renamed
cp = rp  # corpus path = reference_path :: use 'gold' parses as test corpus
runs = (1,1)
kwargs = {
    'left_wall'     :   ''          ,
    'period'        :   False       ,
    'context'       :   2           ,
    'word_space'    :   'sparse'    ,
    'clustering'    :   ['agglomerative', 'ward', 'euclidean'],
    'cluster_range' :   400         ,
    'clustering_metric' : ['silhouette', 'cosine'],
    'grammar_rules' :   2           ,
    'rules_merge'       :   0.8     , # grammar rules merge threshold
    'rules_aggregation' :   0.2     , # grammar rules aggregation threshold
    'top_level'         :   0.01    , # top-level rules generalization threshold
    'tmpath'        :   tmpath      , 
    'verbose'       :   'min'       ,
    'template_path' :   'poc-turtle',
    'linkage_limit' :   1000        }

1-line fast test

In [3]:
%%capture
kwargs['clustering'] = ['agglomerative', 'ward', 'euclidean']
kwargs['cluster_range'] = 400
kwargs['rules_aggregation'] = 0.1  # default 0.2
a0, _, header, log0, rules0 = wide_rows([lines[0], lines[3]], out_dir, cp, rp, runs, **kwargs)
display(html_table([header] + a0))
In [4]:
display(html_table([header] + a0))
| Line | Corpus | Parsing | Space | Linkage | Affinity | G12n | Threshold | Rules | NN | SI | PA | PQ | F1 | Top 5 cluster sizes |
|------|--------|---------|-------|---------|----------|------|-----------|-------|----|----|----|----|----|---------------------|
| 0 | CDS-br-text | LG-E-clean | dALWEd | ward | euclidean | none | --- | 400 | --- | 0.0 | 99% | 96% | 0.97 | [359, 30, 25, 14, 11] |
| 3 | CDS-br-text | LG-E-clean | dALWEd | ward | euclidean | new | 0.1 | 198 | --- | 0.0 | 99% | 89% | 0.90 | [364, 359, 30, 11, 9] |

3 versions of grammar rules generalization:

3 levels of generalization:

  1. Top-level hierarchy of "abstract" categories joining multiple grammar rules (categories);
  2. Grammar categories forming Link Grammar rules, indexed AA...ZZ;
  3. Sub-categories of grammar categories.

Column "G12n" (Generalization) describes agglomeration at levels 2 and 3:

  • none -- no generalization,
  • rules -- legacy version of Jaccard-index-based generalization (~June 2018),
  • updated -- enhanced hierarchical generalization,
  • new -- fast iterative generalization producing an almost flat sub-category level.
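The Jaccard-based idea behind the "rules" version can be sketched in a few lines. This is a hypothetical illustration, not the actual src.grammar_learner implementation: each grammar rule is treated as a set of connectors, and two rules are merged when the Jaccard index of their connector sets reaches a threshold (analogous to the rules_aggregation parameter above).

```python
def jaccard(a, b):
    """Jaccard index of two connector sets: |a & b| / |a | b|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def merge_rules(rules, threshold):
    """Greedy single-pass merge: fold each rule into the first
    already-merged rule whose connector set is similar enough."""
    merged = []
    for connectors in rules:
        for m in merged:
            if jaccard(m, connectors) >= threshold:
                m |= connectors  # aggregate connector sets
                break
        else:
            merged.append(set(connectors))
    return merged

rules = [{'AB+', 'CD-'}, {'AB+', 'CD-', 'EF+'}, {'XY+'}]
print(len(merge_rules(rules, 0.5)))  # first two rules merge -> 2 categories
```

Lowering the threshold (e.g. 0.1 vs the 0.2 default) merges more rule pairs, which is why the "Rules" counts in the tables below drop as the threshold decreases.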

Linkage = "ward", affinity = "euclidean" (the only affinity available with "ward" linkage)
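The affinity choice matters because the three metrics tested below rank word-vector pairs differently; in particular, cosine distance ignores vector magnitude while euclidean and manhattan do not. A minimal pure-Python comparison (scikit-learn, which the learner presumably uses for agglomerative clustering, restricts "ward" linkage to euclidean affinity):

```python
import math

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def manhattan(u, v):
    return sum(abs(a - b) for a, b in zip(u, v))

def cosine(u, v):
    """Cosine *distance* = 1 - cosine similarity."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

u, v = [1.0, 0.0], [10.0, 0.0]  # same direction, different magnitude
print(cosine(u, v))     # 0.0 -- cosine sees the vectors as identical
print(euclidean(u, v))  # 9.0 -- euclidean does not
```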

In [5]:
%%capture
kwargs['clustering'] = ['agglomerative', 'ward', 'euclidean']
kwargs['rules_aggregation'] = 0.1  # default 0.2
a1, _, header, log1, rules1 = wide_rows(lines, out_dir, cp, rp, runs, **kwargs)
In [6]:
display(html_table([header] + a1))
| Line | Corpus | Parsing | Space | Linkage | Affinity | G12n | Threshold | Rules | NN | SI | PA | PQ | F1 | Top 5 cluster sizes |
|------|--------|---------|-------|---------|----------|------|-----------|-------|----|----|----|----|----|---------------------|
| 0 | CDS-br-text | LG-E-clean | dALWEd | ward | euclidean | none | --- | 400 | --- | 0.0 | 99% | 96% | 0.97 | [359, 30, 25, 14, 11] |
| 1 | CDS-br-text | LG-E-clean | dALWEd | ward | euclidean | rules | 0.1 | 249 | --- | 0.0 | 99% | 98% | 0.98 | [359, 51, 30, 28, 27] |
| 2 | CDS-br-text | LG-E-clean | dALWEd | ward | euclidean | updated | 0.1 | 215 | --- | 0.0 | 99% | 97% | 0.98 | [359, 48, 39, 30, 25] |
| 3 | CDS-br-text | LG-E-clean | dALWEd | ward | euclidean | new | 0.1 | 198 | --- | 0.0 | 99% | 89% | 0.90 | [364, 359, 30, 11, 9] |

"Complete" linkage, "manhattan" affinity

In [7]:
%%capture
kwargs['clustering'] = ['agglomerative', 'complete', 'manhattan']
a2, _, header, log2, rules2 = wide_rows(lines, out_dir, cp, rp, runs, **kwargs)
In [8]:
display(html_table([header] + a2))
| Line | Corpus | Parsing | Space | Linkage | Affinity | G12n | Threshold | Rules | NN | SI | PA | PQ | F1 | Top 5 cluster sizes |
|------|--------|---------|-------|---------|----------|------|-----------|-------|----|----|----|----|----|---------------------|
| 0 | CDS-br-text | LG-E-clean | dALCMd | complete | manhattan | none | --- | 400 | --- | 0.0 | 99% | 96% | 0.97 | [466, 37, 11, 9, 8] |
| 1 | CDS-br-text | LG-E-clean | dALCMd | complete | manhattan | rules | 0.1 | 252 | --- | 0.0 | 99% | 97% | 0.98 | [466, 37, 34, 23, 17] |
| 2 | CDS-br-text | LG-E-clean | dALCMd | complete | manhattan | updated | 0.1 | 218 | --- | 0.0 | 99% | 97% | 0.98 | [466, 37, 33, 31, 23] |
| 3 | CDS-br-text | LG-E-clean | dALCMd | complete | manhattan | new | 0.1 | 194 | --- | 0.0 | 99% | 89% | 0.90 | [466, 283, 37, 11, 9] |

"Complete" linkage, "cosine" affinity

In [9]:
%%capture
kwargs['rules_aggregation'] = 0.1
kwargs['clustering'] = ['agglomerative', 'complete', 'cosine']
a3, _, header, log3, rules3 = wide_rows(lines, out_dir, cp, rp, runs, **kwargs)
In [10]:
display(html_table([header] + a3))
| Line | Corpus | Parsing | Space | Linkage | Affinity | G12n | Threshold | Rules | NN | SI | PA | PQ | F1 | Top 5 cluster sizes |
|------|--------|---------|-------|---------|----------|------|-----------|-------|----|----|----|----|----|---------------------|
| 0 | CDS-br-text | LG-E-clean | dALCCd | complete | cosine | none | --- | 400 | --- | 0.0 | 99% | 89% | 0.90 | [124, 53, 43, 21, 14] |
| 1 | CDS-br-text | LG-E-clean | dALCCd | complete | cosine | rules | 0.1 | 244 | --- | 0.0 | 99% | 85% | 0.86 | [338, 124, 21, 17, 14] |
| 2 | CDS-br-text | LG-E-clean | dALCCd | complete | cosine | updated | 0.1 | 207 | --- | 0.0 | 99% | 83% | 0.84 | [338, 124, 67, 26, 20] |
| 3 | CDS-br-text | LG-E-clean | dALCCd | complete | cosine | new | 0.1 | 370 | --- | 0.0 | 99% | 89% | 0.90 | [124, 59, 53, 29, 21] |

"Average" linkage, "cosine" affinity -- similarities less 0.1

In [11]:
%%capture
kwargs['rules_aggregation'] = 0.05   # no generalization with 0.1 similarity!
kwargs['clustering'] = ['agglomerative', 'average', 'cosine']
a4, _, header, log4, rules4 = wide_rows(lines, out_dir, cp, rp, runs, **kwargs)
In [12]:
display(html_table([header] + a4))
| Line | Corpus | Parsing | Space | Linkage | Affinity | G12n | Threshold | Rules | NN | SI | PA | PQ | F1 | Top 5 cluster sizes |
|------|--------|---------|-------|---------|----------|------|-----------|-------|----|----|----|----|----|---------------------|
| 0 | CDS-br-text | LG-E-clean | dALACd | average | cosine | none | --- | 400 | --- | 0.0 | 99% | 97% | 0.98 | [98, 95, 25, 24, 17] |
| 1 | CDS-br-text | LG-E-clean | dALACd | average | cosine | rules | 0.05 | 291 | --- | 0.0 | 99% | 83% | 0.84 | [476, 54, 26, 15, 12] |
| 2 | CDS-br-text | LG-E-clean | dALACd | average | cosine | updated | 0.05 | 283 | --- | 0.0 | 99% | 83% | 0.84 | [476, 56, 26, 21, 15] |
| 3 | CDS-br-text | LG-E-clean | dALACd | average | cosine | new | 0.05 | 359 | --- | 0.0 | 99% | 89% | 0.89 | [315, 98, 58, 24, 13] |

Varying rules aggregation parameters

Rules generalization merge threshold = 0.1

In [13]:
%%capture
kwargs['rules_aggregation'] = 0.1
t1 = []
n = 0
for linkage in ['ward', 'complete', 'average']:
    n += 1
    m = 0
    for affinity in ['euclidean', 'manhattan', 'cosine']:
        if linkage == 'ward' and affinity != 'euclidean': continue
        # m += 1
        # lines[0][0] = round(n + 0.1*m, 1)
        lines[0][0] = ''
        m += 1
        lines[1][0] = round(n + 0.1*m, 1)
        m += 1
        lines[2][0] = round(n + 0.1*m, 1)
        m += 1
        lines[3][0] = round(n + 0.1*m, 1)
        kwargs['clustering'] = ['agglomerative', linkage, affinity]
        a, _, header, log, _ = wide_rows(lines, out_dir, cp, rp, runs, **kwargs)
        t1.extend(a)
        table.extend(a)
In [14]:
display(html_table([header] + t1))
| Line | Corpus | Parsing | Space | Linkage | Affinity | G12n | Threshold | Rules | NN | SI | PA | PQ | F1 | Top 5 cluster sizes |
|------|--------|---------|-------|---------|----------|------|-----------|-------|----|----|----|----|----|---------------------|
|  | CDS-br-text | LG-E-clean | dALWEd | ward | euclidean | none | --- | 400 | --- | 0.0 | 99% | 96% | 0.97 | [359, 30, 25, 14, 11] |
| 1.1 | CDS-br-text | LG-E-clean | dALWEd | ward | euclidean | rules | 0.1 | 249 | --- | 0.0 | 99% | 98% | 0.98 | [359, 51, 30, 28, 27] |
| 1.2 | CDS-br-text | LG-E-clean | dALWEd | ward | euclidean | updated | 0.1 | 215 | --- | 0.0 | 99% | 97% | 0.98 | [359, 48, 39, 30, 25] |
| 1.3 | CDS-br-text | LG-E-clean | dALWEd | ward | euclidean | new | 0.1 | 198 | --- | 0.0 | 99% | 89% | 0.90 | [364, 359, 30, 11, 9] |
|  | CDS-br-text | LG-E-clean | dALCEd | complete | euclidean | none | --- | 400 | --- | 0.0 | 99% | 96% | 0.97 | [466, 37, 11, 9, 8] |
| 2.1 | CDS-br-text | LG-E-clean | dALCEd | complete | euclidean | rules | 0.1 | 252 | --- | 0.0 | 99% | 97% | 0.98 | [466, 37, 34, 23, 17] |
| 2.2 | CDS-br-text | LG-E-clean | dALCEd | complete | euclidean | updated | 0.1 | 218 | --- | 0.0 | 99% | 97% | 0.98 | [466, 37, 33, 31, 23] |
| 2.3 | CDS-br-text | LG-E-clean | dALCEd | complete | euclidean | new | 0.1 | 194 | --- | 0.0 | 99% | 89% | 0.90 | [466, 283, 37, 11, 9] |
|  | CDS-br-text | LG-E-clean | dALCMd | complete | manhattan | none | --- | 400 | --- | 0.0 | 99% | 96% | 0.97 | [466, 37, 11, 9, 8] |
| 2.4 | CDS-br-text | LG-E-clean | dALCMd | complete | manhattan | rules | 0.1 | 252 | --- | 0.0 | 99% | 97% | 0.98 | [466, 37, 34, 23, 17] |
| 2.5 | CDS-br-text | LG-E-clean | dALCMd | complete | manhattan | updated | 0.1 | 218 | --- | 0.0 | 99% | 97% | 0.98 | [466, 37, 33, 31, 23] |
| 2.6 | CDS-br-text | LG-E-clean | dALCMd | complete | manhattan | new | 0.1 | 194 | --- | 0.0 | 99% | 89% | 0.90 | [466, 283, 37, 11, 9] |
|  | CDS-br-text | LG-E-clean | dALCCd | complete | cosine | none | --- | 400 | --- | 0.0 | 99% | 89% | 0.90 | [124, 53, 43, 21, 14] |
| 2.7 | CDS-br-text | LG-E-clean | dALCCd | complete | cosine | rules | 0.1 | 244 | --- | 0.0 | 99% | 85% | 0.86 | [338, 124, 21, 17, 14] |
| 2.8 | CDS-br-text | LG-E-clean | dALCCd | complete | cosine | updated | 0.1 | 207 | --- | 0.0 | 99% | 83% | 0.84 | [338, 124, 67, 26, 20] |
| 2.9 | CDS-br-text | LG-E-clean | dALCCd | complete | cosine | new | 0.1 | 370 | --- | 0.0 | 99% | 89% | 0.90 | [124, 59, 53, 29, 21] |
|  | CDS-br-text | LG-E-clean | dALAEd | average | euclidean | none | --- | 400 | --- | 0.0 | 99% | 96% | 0.97 | [627, 3, 2, 1, 0] |
| 3.1 | CDS-br-text | LG-E-clean | dALAEd | average | euclidean | rules | 0.1 | 247 | --- | 0.0 | 99% | 97% | 0.98 | [627, 20, 16, 11, 9] |
| 3.2 | CDS-br-text | LG-E-clean | dALAEd | average | euclidean | updated | 0.1 | 197 | --- | 0.0 | 99% | 94% | 0.95 | [643, 46, 35, 11, 10] |
| 3.3 | CDS-br-text | LG-E-clean | dALAEd | average | euclidean | new | 0.1 | 192 | --- | 0.0 | 99% | 89% | 0.90 | [627, 202, 4, 3, 2] |
|  | CDS-br-text | LG-E-clean | dALAMd | average | manhattan | none | --- | 400 | --- | 0.0 | 99% | 96% | 0.98 | [626, 3, 2, 1, 0] |
| 3.4 | CDS-br-text | LG-E-clean | dALAMd | average | manhattan | rules | 0.1 | 247 | --- | 0.0 | 99% | 97% | 0.98 | [626, 20, 17, 11, 9] |
| 3.5 | CDS-br-text | LG-E-clean | dALAMd | average | manhattan | updated | 0.1 | 192 | --- | 0.0 | 99% | 97% | 0.97 | [638, 48, 46, 9, 7] |
| 3.6 | CDS-br-text | LG-E-clean | dALAMd | average | manhattan | new | 0.1 | 193 | --- | 0.0 | 99% | 89% | 0.90 | [626, 201, 4, 3, 2] |
|  | CDS-br-text | LG-E-clean | dALACd | average | cosine | none | --- | 400 | --- | 0.0 | 99% | 97% | 0.98 | [98, 95, 25, 24, 17] |
| 3.7 | CDS-br-text | LG-E-clean | dALACd | average | cosine | rules | 0.1 | 400 | --- | 0.0 | 99% | 97% | 0.97 | [98, 95, 25, 24, 17] |
| 3.8 | CDS-br-text | LG-E-clean | dALACd | average | cosine | updated | 0.1 | 400 | --- | 0.0 | 99% | 97% | 0.97 | [98, 95, 25, 24, 17] |
| 3.9 | CDS-br-text | LG-E-clean | dALACd | average | cosine | new | 0.1 | 400 | --- | 0.0 | 99% | 97% | 0.97 | [98, 95, 25, 24, 17] |

Rules generalization merge threshold = 0.05

In [ ]:
%%capture
kwargs['rules_aggregation'] = 0.05
t2 = []
# n = 0
for linkage in ['ward', 'complete', 'average']:
    n += 1
    m = 0
    for affinity in ['euclidean', 'manhattan', 'cosine']:
        if linkage == 'ward' and affinity != 'euclidean': continue
        # m += 1
        # lines[0][0] = round(n + 0.1*m, 1)
        lines[0][0] = ''
        m += 1
        lines[1][0] = round(n + 0.1*m, 1)
        m += 1
        lines[2][0] = round(n + 0.1*m, 1)
        m += 1
        lines[3][0] = round(n + 0.1*m, 1)
        kwargs['clustering'] = ['agglomerative', linkage, affinity]
        a, _, header, log, _ = wide_rows(lines, out_dir, cp, rp, runs, **kwargs)
        t2.extend(a)
        table.extend(a)
In [16]:
display(html_table([header] + t2))
| Line | Corpus | Parsing | Space | Linkage | Affinity | G12n | Threshold | Rules | NN | SI | PA | PQ | F1 | Top 5 cluster sizes |
|------|--------|---------|-------|---------|----------|------|-----------|-------|----|----|----|----|----|---------------------|
|  | CDS-br-text | LG-E-clean | dALWEd | ward | euclidean | none | --- | 400 | --- | 0.0 | 99% | 96% | 0.97 | [359, 30, 25, 14, 11] |
| 4.1 | CDS-br-text | LG-E-clean | dALWEd | ward | euclidean | rules | 0.05 | 149 | --- | 0.0 | 99% | 90% | 0.91 | [413, 193, 32, 30, 26] |
| 4.2 | CDS-br-text | LG-E-clean | dALWEd | ward | euclidean | updated | 0.05 | 68 | --- | 0.0 | 99% | 80% | 0.81 | [672, 174, 21, 19, 14] |
| 4.3 | CDS-br-text | LG-E-clean | dALWEd | ward | euclidean | new | 0.05 | 81 | --- | 0.0 | 99% | 76% | 0.77 | [585, 359, 6, 2, 1] |

Start with 200 clusters, merge threshold = 0.1

In [19]:
%%capture
kwargs['cluster_range'] = 200
kwargs['rules_aggregation'] = 0.1
t4 = []
# n = 0
for linkage in ['ward', 'complete', 'average']:
    n += 1
    m = 0
    for affinity in ['euclidean', 'manhattan', 'cosine']:
        if linkage == 'ward' and affinity != 'euclidean': continue
        lines[0][0] = ''
        m += 1
        lines[1][0] = round(n + 0.1*m, 1)
        m += 1
        lines[2][0] = round(n + 0.1*m, 1)
        m += 1
        lines[3][0] = round(n + 0.1*m, 1)
        kwargs['clustering'] = ['agglomerative', linkage, affinity]
        a, _, header, log, _ = wide_rows(lines, out_dir, cp, rp, runs, **kwargs)
        t4.extend(a)
        table.extend(a)
In [20]:
display(html_table([header] + t4))
| Line | Corpus | Parsing | Space | Linkage | Affinity | G12n | Threshold | Rules | NN | SI | PA | PQ | F1 | Top 5 cluster sizes |
|------|--------|---------|-------|---------|----------|------|-----------|-------|----|----|----|----|----|---------------------|
|  | CDS-br-text | LG-E-clean | dALWEd | ward | euclidean | none | --- | 200 | --- | 0.0 | 99% | 98% | 0.98 | [569, 34, 32, 26, 20] |
| 7.1 | CDS-br-text | LG-E-clean | dALWEd | ward | euclidean | rules | 0.1 | 128 | --- | 0.0 | 99% | 96% | 0.96 | [569, 38, 34, 32, 27] |
| 7.2 | CDS-br-text | LG-E-clean | dALWEd | ward | euclidean | updated | 0.1 | 108 | --- | 0.0 | 99% | 95% | 0.95 | [569, 104, 57, 34, 32] |
| 7.3 | CDS-br-text | LG-E-clean | dALWEd | ward | euclidean | new | 0.1 | 130 | --- | 0.0 | 99% | 96% | 0.96 | [569, 66, 34, 32, 26] |
|  | CDS-br-text | LG-E-clean | dALCEd | complete | euclidean | none | --- | 200 | --- | 0.0 | 99% | 96% | 0.97 | [817, 4, 3, 2, 1] |
| 8.1 | CDS-br-text | LG-E-clean | dALCEd | complete | euclidean | rules | 0.1 | 124 | --- | 0.0 | 99% | 93% | 0.93 | [817, 38, 17, 7, 6] |
| 8.2 | CDS-br-text | LG-E-clean | dALCEd | complete | euclidean | updated | 0.1 | 102 | --- | 0.0 | 99% | 91% | 0.92 | [817, 58, 36, 6, 5] |
| 8.3 | CDS-br-text | LG-E-clean | dALCEd | complete | euclidean | new | 0.1 | 131 | --- | 0.0 | 99% | 92% | 0.93 | [817, 33, 24, 11, 4] |
|  | CDS-br-text | LG-E-clean | dALCMd | complete | manhattan | none | --- | 200 | --- | 0.0 | 99% | 96% | 0.97 | [817, 4, 3, 2, 1] |
| 8.4 | CDS-br-text | LG-E-clean | dALCMd | complete | manhattan | rules | 0.1 | 124 | --- | 0.0 | 99% | 93% | 0.93 | [817, 38, 17, 7, 6] |
| 8.5 | CDS-br-text | LG-E-clean | dALCMd | complete | manhattan | updated | 0.1 | 102 | --- | 0.0 | 99% | 91% | 0.92 | [817, 58, 36, 6, 5] |
| 8.6 | CDS-br-text | LG-E-clean | dALCMd | complete | manhattan | new | 0.1 | 131 | --- | 0.0 | 99% | 92% | 0.93 | [817, 33, 24, 11, 4] |
|  | CDS-br-text | LG-E-clean | dALCCd | complete | cosine | none | --- | 200 | --- | 0.0 | 99% | 79% | 0.80 | [452, 53, 43, 21, 14] |
| 8.7 | CDS-br-text | LG-E-clean | dALCCd | complete | cosine | rules | 0.1 | 69 | --- | 0.0 | 99% | 73% | 0.74 | [859, 43, 37, 5, 4] |
| 8.8 | CDS-br-text | LG-E-clean | dALCCd | complete | cosine | updated | 0.1 | 41 | --- | 0.0 | 99% | 71% | 0.72 | [967, 20, 3, 2, 1] |
| 8.9 | CDS-br-text | LG-E-clean | dALCCd | complete | cosine | new | 0.1 | 188 | --- | 0.0 | 99% | 78% | 0.79 | [452, 53, 43, 21, 14] |
|  | CDS-br-text | LG-E-clean | dALAEd | average | euclidean | none | --- | 200 | --- | 0.0 | 99% | 96% | 0.97 | [835, 1, 0] |
| 9.1 | CDS-br-text | LG-E-clean | dALAEd | average | euclidean | rules | 0.1 | 121 | --- | 0.0 | 99% | 93% | 0.93 | [835, 39, 12, 5, 4] |
| 9.2 | CDS-br-text | LG-E-clean | dALAEd | average | euclidean | updated | 0.1 | 108 | --- | 0.0 | 99% | 93% | 0.94 | [835, 23, 21, 9, 7] |
| 9.3 | CDS-br-text | LG-E-clean | dALAEd | average | euclidean | new | 0.1 | 126 | --- | 0.0 | 99% | 93% | 0.93 | [835, 30, 24, 8, 6] |
|  | CDS-br-text | LG-E-clean | dALAMd | average | manhattan | none | --- | 200 | --- | 0.0 | 99% | 96% | 0.97 | [835, 1, 0] |
| 9.4 | CDS-br-text | LG-E-clean | dALAMd | average | manhattan | rules | 0.1 | 121 | --- | 0.0 | 99% | 93% | 0.93 | [835, 39, 12, 5, 4] |
| 9.5 | CDS-br-text | LG-E-clean | dALAMd | average | manhattan | updated | 0.1 | 108 | --- | 0.0 | 99% | 93% | 0.94 | [835, 23, 21, 9, 7] |
| 9.6 | CDS-br-text | LG-E-clean | dALAMd | average | manhattan | new | 0.1 | 126 | --- | 0.0 | 99% | 93% | 0.93 | [835, 30, 24, 8, 6] |
|  | CDS-br-text | LG-E-clean | dALACd | average | cosine | none | --- | 200 | --- | 0.0 | 99% | 72% | 0.73 | [821, 10, 2, 1, 0] |
| 9.7 | CDS-br-text | LG-E-clean | dALACd | average | cosine | rules | 0.1 | 200 | --- | 0.0 | 99% | 73% | 0.74 | [821, 10, 2, 1, 0] |
| 9.8 | CDS-br-text | LG-E-clean | dALACd | average | cosine | updated | 0.1 | 200 | --- | 0.0 | 99% | 73% | 0.74 | [821, 10, 2, 1, 0] |
| 9.9 | CDS-br-text | LG-E-clean | dALACd | average | cosine | new | 0.1 | 200 | --- | 0.0 | 99% | 73% | 0.74 | [821, 10, 2, 1, 0] |

All tests

In [21]:
display(html_table([header] + table))
| Line | Corpus | Parsing | Space | Linkage | Affinity | G12n | Threshold | Rules | NN | SI | PA | PQ | F1 | Top 5 cluster sizes |
|------|--------|---------|-------|---------|----------|------|-----------|-------|----|----|----|----|----|---------------------|
|  | CDS-br-text | LG-E-clean | dALWEd | ward | euclidean | none | --- | 400 | --- | 0.0 | 99% | 96% | 0.97 | [359, 30, 25, 14, 11] |
| 1.1 | CDS-br-text | LG-E-clean | dALWEd | ward | euclidean | rules | 0.1 | 249 | --- | 0.0 | 99% | 98% | 0.98 | [359, 51, 30, 28, 27] |
| 1.2 | CDS-br-text | LG-E-clean | dALWEd | ward | euclidean | updated | 0.1 | 215 | --- | 0.0 | 99% | 97% | 0.98 | [359, 48, 39, 30, 25] |
| 1.3 | CDS-br-text | LG-E-clean | dALWEd | ward | euclidean | new | 0.1 | 198 | --- | 0.0 | 99% | 89% | 0.90 | [364, 359, 30, 11, 9] |
|  | CDS-br-text | LG-E-clean | dALCEd | complete | euclidean | none | --- | 400 | --- | 0.0 | 99% | 96% | 0.97 | [466, 37, 11, 9, 8] |
| 2.1 | CDS-br-text | LG-E-clean | dALCEd | complete | euclidean | rules | 0.1 | 252 | --- | 0.0 | 99% | 97% | 0.98 | [466, 37, 34, 23, 17] |
| 2.2 | CDS-br-text | LG-E-clean | dALCEd | complete | euclidean | updated | 0.1 | 218 | --- | 0.0 | 99% | 97% | 0.98 | [466, 37, 33, 31, 23] |
| 2.3 | CDS-br-text | LG-E-clean | dALCEd | complete | euclidean | new | 0.1 | 194 | --- | 0.0 | 99% | 89% | 0.90 | [466, 283, 37, 11, 9] |
|  | CDS-br-text | LG-E-clean | dALCMd | complete | manhattan | none | --- | 400 | --- | 0.0 | 99% | 96% | 0.97 | [466, 37, 11, 9, 8] |
| 2.4 | CDS-br-text | LG-E-clean | dALCMd | complete | manhattan | rules | 0.1 | 252 | --- | 0.0 | 99% | 97% | 0.98 | [466, 37, 34, 23, 17] |
| 2.5 | CDS-br-text | LG-E-clean | dALCMd | complete | manhattan | updated | 0.1 | 218 | --- | 0.0 | 99% | 97% | 0.98 | [466, 37, 33, 31, 23] |
| 2.6 | CDS-br-text | LG-E-clean | dALCMd | complete | manhattan | new | 0.1 | 194 | --- | 0.0 | 99% | 89% | 0.90 | [466, 283, 37, 11, 9] |
|  | CDS-br-text | LG-E-clean | dALCCd | complete | cosine | none | --- | 400 | --- | 0.0 | 99% | 89% | 0.90 | [124, 53, 43, 21, 14] |
| 2.7 | CDS-br-text | LG-E-clean | dALCCd | complete | cosine | rules | 0.1 | 244 | --- | 0.0 | 99% | 85% | 0.86 | [338, 124, 21, 17, 14] |
| 2.8 | CDS-br-text | LG-E-clean | dALCCd | complete | cosine | updated | 0.1 | 207 | --- | 0.0 | 99% | 83% | 0.84 | [338, 124, 67, 26, 20] |
| 2.9 | CDS-br-text | LG-E-clean | dALCCd | complete | cosine | new | 0.1 | 370 | --- | 0.0 | 99% | 89% | 0.90 | [124, 59, 53, 29, 21] |
|  | CDS-br-text | LG-E-clean | dALAEd | average | euclidean | none | --- | 400 | --- | 0.0 | 99% | 96% | 0.97 | [627, 3, 2, 1, 0] |
| 3.1 | CDS-br-text | LG-E-clean | dALAEd | average | euclidean | rules | 0.1 | 247 | --- | 0.0 | 99% | 97% | 0.98 | [627, 20, 16, 11, 9] |
| 3.2 | CDS-br-text | LG-E-clean | dALAEd | average | euclidean | updated | 0.1 | 197 | --- | 0.0 | 99% | 94% | 0.95 | [643, 46, 35, 11, 10] |
| 3.3 | CDS-br-text | LG-E-clean | dALAEd | average | euclidean | new | 0.1 | 192 | --- | 0.0 | 99% | 89% | 0.90 | [627, 202, 4, 3, 2] |
|  | CDS-br-text | LG-E-clean | dALAMd | average | manhattan | none | --- | 400 | --- | 0.0 | 99% | 96% | 0.98 | [626, 3, 2, 1, 0] |
| 3.4 | CDS-br-text | LG-E-clean | dALAMd | average | manhattan | rules | 0.1 | 247 | --- | 0.0 | 99% | 97% | 0.98 | [626, 20, 17, 11, 9] |
| 3.5 | CDS-br-text | LG-E-clean | dALAMd | average | manhattan | updated | 0.1 | 192 | --- | 0.0 | 99% | 97% | 0.97 | [638, 48, 46, 9, 7] |
| 3.6 | CDS-br-text | LG-E-clean | dALAMd | average | manhattan | new | 0.1 | 193 | --- | 0.0 | 99% | 89% | 0.90 | [626, 201, 4, 3, 2] |
|  | CDS-br-text | LG-E-clean | dALACd | average | cosine | none | --- | 400 | --- | 0.0 | 99% | 97% | 0.98 | [98, 95, 25, 24, 17] |
| 3.7 | CDS-br-text | LG-E-clean | dALACd | average | cosine | rules | 0.1 | 400 | --- | 0.0 | 99% | 97% | 0.97 | [98, 95, 25, 24, 17] |
| 3.8 | CDS-br-text | LG-E-clean | dALACd | average | cosine | updated | 0.1 | 400 | --- | 0.0 | 99% | 97% | 0.97 | [98, 95, 25, 24, 17] |
| 3.9 | CDS-br-text | LG-E-clean | dALACd | average | cosine | new | 0.1 | 400 | --- | 0.0 | 99% | 97% | 0.97 | [98, 95, 25, 24, 17] |
|  | CDS-br-text | LG-E-clean | dALWEd | ward | euclidean | none | --- | 400 | --- | 0.0 | 99% | 96% | 0.97 | [359, 30, 25, 14, 11] |
| 4.1 | CDS-br-text | LG-E-clean | dALWEd | ward | euclidean | rules | 0.05 | 149 | --- | 0.0 | 99% | 90% | 0.91 | [413, 193, 32, 30, 26] |
| 4.2 | CDS-br-text | LG-E-clean | dALWEd | ward | euclidean | updated | 0.05 | 68 | --- | 0.0 | 99% | 80% | 0.81 | [672, 174, 21, 19, 14] |
| 4.3 | CDS-br-text | LG-E-clean | dALWEd | ward | euclidean | new | 0.05 | 81 | --- | 0.0 | 99% | 76% | 0.77 | [585, 359, 6, 2, 1] |
|  | CDS-br-text | LG-E-clean | dALWEd | ward | euclidean | none | --- | 200 | --- | 0.0 | 99% | 98% | 0.98 | [569, 34, 32, 26, 20] |
| 7.1 | CDS-br-text | LG-E-clean | dALWEd | ward | euclidean | rules | 0.1 | 128 | --- | 0.0 | 99% | 96% | 0.96 | [569, 38, 34, 32, 27] |
| 7.2 | CDS-br-text | LG-E-clean | dALWEd | ward | euclidean | updated | 0.1 | 108 | --- | 0.0 | 99% | 95% | 0.95 | [569, 104, 57, 34, 32] |
| 7.3 | CDS-br-text | LG-E-clean | dALWEd | ward | euclidean | new | 0.1 | 130 | --- | 0.0 | 99% | 96% | 0.96 | [569, 66, 34, 32, 26] |
|  | CDS-br-text | LG-E-clean | dALCEd | complete | euclidean | none | --- | 200 | --- | 0.0 | 99% | 96% | 0.97 | [817, 4, 3, 2, 1] |
| 8.1 | CDS-br-text | LG-E-clean | dALCEd | complete | euclidean | rules | 0.1 | 124 | --- | 0.0 | 99% | 93% | 0.93 | [817, 38, 17, 7, 6] |
| 8.2 | CDS-br-text | LG-E-clean | dALCEd | complete | euclidean | updated | 0.1 | 102 | --- | 0.0 | 99% | 91% | 0.92 | [817, 58, 36, 6, 5] |
| 8.3 | CDS-br-text | LG-E-clean | dALCEd | complete | euclidean | new | 0.1 | 131 | --- | 0.0 | 99% | 92% | 0.93 | [817, 33, 24, 11, 4] |
|  | CDS-br-text | LG-E-clean | dALCMd | complete | manhattan | none | --- | 200 | --- | 0.0 | 99% | 96% | 0.97 | [817, 4, 3, 2, 1] |
| 8.4 | CDS-br-text | LG-E-clean | dALCMd | complete | manhattan | rules | 0.1 | 124 | --- | 0.0 | 99% | 93% | 0.93 | [817, 38, 17, 7, 6] |
| 8.5 | CDS-br-text | LG-E-clean | dALCMd | complete | manhattan | updated | 0.1 | 102 | --- | 0.0 | 99% | 91% | 0.92 | [817, 58, 36, 6, 5] |
| 8.6 | CDS-br-text | LG-E-clean | dALCMd | complete | manhattan | new | 0.1 | 131 | --- | 0.0 | 99% | 92% | 0.93 | [817, 33, 24, 11, 4] |
|  | CDS-br-text | LG-E-clean | dALCCd | complete | cosine | none | --- | 200 | --- | 0.0 | 99% | 79% | 0.80 | [452, 53, 43, 21, 14] |
| 8.7 | CDS-br-text | LG-E-clean | dALCCd | complete | cosine | rules | 0.1 | 69 | --- | 0.0 | 99% | 73% | 0.74 | [859, 43, 37, 5, 4] |
| 8.8 | CDS-br-text | LG-E-clean | dALCCd | complete | cosine | updated | 0.1 | 41 | --- | 0.0 | 99% | 71% | 0.72 | [967, 20, 3, 2, 1] |
| 8.9 | CDS-br-text | LG-E-clean | dALCCd | complete | cosine | new | 0.1 | 188 | --- | 0.0 | 99% | 78% | 0.79 | [452, 53, 43, 21, 14] |
|  | CDS-br-text | LG-E-clean | dALAEd | average | euclidean | none | --- | 200 | --- | 0.0 | 99% | 96% | 0.97 | [835, 1, 0] |
| 9.1 | CDS-br-text | LG-E-clean | dALAEd | average | euclidean | rules | 0.1 | 121 | --- | 0.0 | 99% | 93% | 0.93 | [835, 39, 12, 5, 4] |
| 9.2 | CDS-br-text | LG-E-clean | dALAEd | average | euclidean | updated | 0.1 | 108 | --- | 0.0 | 99% | 93% | 0.94 | [835, 23, 21, 9, 7] |
| 9.3 | CDS-br-text | LG-E-clean | dALAEd | average | euclidean | new | 0.1 | 126 | --- | 0.0 | 99% | 93% | 0.93 | [835, 30, 24, 8, 6] |
|  | CDS-br-text | LG-E-clean | dALAMd | average | manhattan | none | --- | 200 | --- | 0.0 | 99% | 96% | 0.97 | [835, 1, 0] |
| 9.4 | CDS-br-text | LG-E-clean | dALAMd | average | manhattan | rules | 0.1 | 121 | --- | 0.0 | 99% | 93% | 0.93 | [835, 39, 12, 5, 4] |
| 9.5 | CDS-br-text | LG-E-clean | dALAMd | average | manhattan | updated | 0.1 | 108 | --- | 0.0 | 99% | 93% | 0.94 | [835, 23, 21, 9, 7] |
| 9.6 | CDS-br-text | LG-E-clean | dALAMd | average | manhattan | new | 0.1 | 126 | --- | 0.0 | 99% | 93% | 0.93 | [835, 30, 24, 8, 6] |
|  | CDS-br-text | LG-E-clean | dALACd | average | cosine | none | --- | 200 | --- | 0.0 | 99% | 72% | 0.73 | [821, 10, 2, 1, 0] |
| 9.7 | CDS-br-text | LG-E-clean | dALACd | average | cosine | rules | 0.1 | 200 | --- | 0.0 | 99% | 73% | 0.74 | [821, 10, 2, 1, 0] |
| 9.8 | CDS-br-text | LG-E-clean | dALACd | average | cosine | updated | 0.1 | 200 | --- | 0.0 | 99% | 73% | 0.74 | [821, 10, 2, 1, 0] |
| 9.9 | CDS-br-text | LG-E-clean | dALACd | average | cosine | new | 0.1 | 200 | --- | 0.0 | 99% | 73% | 0.74 | [821, 10, 2, 1, 0] |
In [22]:
print(UTC(), ':: finished, elapsed', str(round((time.time()-start)/3600.0, 1)), 'hours')
table_str = list2file(table, out_dir + '/table.txt')
print('Results saved to', out_dir + '/table.txt')
2018-11-23 18:23:20 UTC :: finished, elapsed 0.4 hours
Results saved to /home/obaskov/94/language-learning/output/Grammar-Rules-Generalization-2018-11-23_/table.txt