How do you control text formatting when launching GAP scripts from the command line? - gap-system

I would like to understand GAP's behaviour when I launch a script from the command line, for example
$ gap mytest.gap
as opposed to calling it from inside GAP
gap> Read("mytest.gap");
In particular, I've tried to suppress the automatic formatting (line breaks and indentation). If the file mytest.gap contains the following
SetPrintFormattingStatus( "*stdout*", false );
Print( Primes{[1..30]}, "\n" );
then I get the expected behaviour when calling it with Read(), namely
[ 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101, 103, 107, 109, 113 ]
whereas when launching it from the command line, I still get
[ 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71,
73, 79, 83, 89, 97, 101, 103, 107, 109, 113 ]
Can somebody please give an explanation for this behaviour? Is GAP's treatment of scripts launched from the command line documented somewhere? I couldn't find it in the manual; the man page does say usage: gap [OPTIONS] [FILES], but it only documents how the options are treated.

I am afraid that it is currently not possible to completely disable the output formatting of Print the way you tried.
However, you can work around the problem by using the newer stream APIs and PrintTo, like this:
s := OutputTextUser();                  # an output text stream connected to the user's terminal
SetPrintFormattingStatus( s, false );   # disable automatic line breaking and indentation on this stream
PrintTo( s, Primes{[1..30]}, "\n" );    # write to the stream instead of using Print
I logged this as a bug in the GAP issue tracker, and perhaps we can fix it in the next release (or perhaps somebody will explain why it's "not a bug but a feature" ;-).

Related

Spacy \ Matcher \ set membership throws an exception

I use spaCy 2.7. Following the Set Membership example, I tried the code below with the IN logic (a simple list of words):
doc = nlp(SOME_TEXT)
matcher = Matcher(nlp.vocab)
pattern = [{'LOWER': {'IN' : ["i","you","we","they"]}}]
matcher.add("myPattern",None, pattern)
matches = matcher(doc)
...
I get an exception; the pattern being added was [{'LOWER': {'IN': ['i', 'you', 'we', 'they']}}] with index 0:
Traceback (most recent call last):
File "test.py", line 85, in <module>
matcher.add(key,None, curr)
File "matcher.pyx", line 266, in spacy.matcher.Matcher.add
File "matcher.pyx", line 99, in spacy.matcher.init_pattern
TypeError: an integer is required
I went to the open source file (matcher.pyx), line 99, but I'm not sure what the bug is, or maybe I'm using it incorrectly.
Sorry if this was confusing – but the GitHub thread you're referring to is still only the spec and proposal, i.e. the planned implementation. The changes will hopefully ship with spaCy v2.1.0 (since some of the changes to the Matcher internals are not fully backwards compatible).
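For reference, here is a minimal sketch of the same pattern on a version that does support set membership (this assumes spaCy >= 2.1.0, a blank English pipeline, and an invented sample sentence; the v2.x matcher.add signature is used):
import spacy
from spacy.matcher import Matcher

nlp = spacy.blank("en")                   # a tokenizer-only pipeline is enough for LOWER patterns
matcher = Matcher(nlp.vocab)

pattern = [{"LOWER": {"IN": ["i", "you", "we", "they"]}}]
matcher.add("myPattern", None, pattern)   # spaCy v2.x signature; in v3 use matcher.add("myPattern", [pattern])

doc = nlp("You and I think they should join us.")
for match_id, start, end in matcher(doc):
    print(doc[start:end].text)            # prints: You, I, they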

Pig: Illustrate error 2997

The code below works fine and produces results in the Grunt shell (local mode), except that ILLUSTRATE on the last relation gives error 2997.
/* Open Grunt in local mode pig -x local */
STOCK_A= LOAD '/media/sf_sand/NYSE_daily_prices_A.csv' USING PigStorage(',') AS (exchange:chararray,symbol:chararray,date:chararray,open:float,high:float,low:float,close:float,volume:int,adj_close:float);
describe STOCK_A;
illustrate STOCK_A;
b= LIMIT STOCK_A 100;
describe b;
illustrate b;
c= FOREACH b GENERATE *;
illustrate c;    -- this works
c= FOREACH b GENERATE symbol,date,close;
dump c;          -- this works
ILLUSTRATE on this last relation c does not work; below is the error (ERROR 2997: Encountered IOException):
2015-06-10 11:52:23,621 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: file:///
2015-06-10 11:52:23,647 [main] WARN org.apache.pig.data.SchemaTupleBackend - SchemaTupleBackend has already been initialized
2015-06-10 11:52:23,647 [main] INFO org.apache.pig.newplan.logical.optimizer.LogicalPlanOptimizer - {RULES_ENABLED=[ConstantCalculator, LoadTypeCastInserter, PredicatePushdownOptimizer, StreamTypeCastInserter], RULES_DISABLED=[AddForEach, ColumnMapKeyPrune, GroupByConstParallelSetter, LimitOptimizer, MergeFilter, MergeForEach, PartitionFilterOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter]}
2015-06-10 11:52:23,650 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - File concatenation threshold: 100 optimistic? false
2015-06-10 11:52:23,650 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 1
2015-06-10 11:52:23,650 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after optimization: 1
2015-06-10 11:52:23,651 [main] INFO org.apache.pig.tools.pigstats.mapreduce.MRScriptState - Pig script settings are added to the job
2015-06-10 11:52:23,651 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
2015-06-10 11:52:23,658 [main] INFO org.apache.pig.data.SchemaTupleFrontend - Key [pig.schematuple] is false, will not generate code.
2015-06-10 11:52:23,658 [main] INFO org.apache.pig.data.SchemaTupleFrontend - Starting process to move generated code to distributed cacche
2015-06-10 11:52:23,658 [main] INFO org.apache.pig.data.SchemaTupleFrontend - Distributed cache not supported or needed in local mode. Setting key [pig.schematuple.local.dir] with code temp directory: /tmp/1433937143658-0
2015-06-10 11:52:23,667 [main] INFO org.apache.pig.data.SchemaTupleBackend - Key [pig.schematuple] was not set... will not generate code.
2015-06-10 11:52:23,669 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapOnly$Map - Aliases being processed per job phase (AliasName[line,offset]): M: STOCK_A[3,9] C: R:
2015-06-10 11:52:23,672 [main] INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 1
2015-06-10 11:52:23,672 [main] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths to process : 1
2015-06-10 11:52:23,705 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - File concatenation threshold: 100 optimistic? false
2015-06-10 11:52:23,707 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 1
2015-06-10 11:52:23,707 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after optimization: 1
2015-06-10 11:52:23,708 [main] INFO org.apache.pig.tools.pigstats.mapreduce.MRScriptState - Pig script settings are added to the job
2015-06-10 11:52:23,708 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
2015-06-10 11:52:23,708 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Reduce phase detected, estimating # of required reducers.
2015-06-10 11:52:23,709 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting Parallelism to 1
2015-06-10 11:52:23,723 [main] WARN org.apache.pig.data.SchemaTupleBackend - SchemaTupleBackend has already been initialized
2015-06-10 11:52:23,727 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Map - Aliases being processed per job phase (AliasName[line,offset]): M: STOCK_A[3,9],STOCK_A[-1,-1],c[8,3] C: R: b[4,3]
2015-06-10 11:52:23,727 [main] WARN org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigHadoopLogger - org.apache.pig.builtin.Utf8StorageConverter(FIELD_DISCARDED_TYPE_CONVERSION_FAILED): Unable to interpret value [115, 116, 111, 99, 107, 95, 112, 114, 105, 99, 101, 95, 111, 112, 101, 110] in field being converted to float, caught NumberFormatException <For input string: "stock_price_open"> field discarded
2015-06-10 11:52:23,727 [main] WARN org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigHadoopLogger - org.apache.pig.builtin.Utf8StorageConverter(FIELD_DISCARDED_TYPE_CONVERSION_FAILED): Unable to interpret value [115, 116, 111, 99, 107, 95, 112, 114, 105, 99, 101, 95, 104, 105, 103, 104] in field being converted to float, caught NumberFormatException <For input string: "stock_price_high"> field discarded
2015-06-10 11:52:23,727 [main] WARN org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigHadoopLogger - org.apache.pig.builtin.Utf8StorageConverter(FIELD_DISCARDED_TYPE_CONVERSION_FAILED): Unable to interpret value [115, 116, 111, 99, 107, 95, 112, 114, 105, 99, 101, 95, 108, 111, 119] in field being converted to float, caught NumberFormatException <For input string: "stock_price_low"> field discarded
2015-06-10 11:52:23,727 [main] WARN org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigHadoopLogger - org.apache.pig.builtin.Utf8StorageConverter(FIELD_DISCARDED_TYPE_CONVERSION_FAILED): Unable to interpret value [115, 116, 111, 99, 107, 95, 112, 114, 105, 99, 101, 95, 99, 108, 111, 115, 101] in field being converted to float, caught NumberFormatException <For input string: "stock_price_close"> field discarded
2015-06-10 11:52:23,727 [main] WARN org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigHadoopLogger - org.apache.pig.builtin.Utf8StorageConverter(FIELD_DISCARDED_TYPE_CONVERSION_FAILED): Unable to interpret value [115, 116, 111, 99, 107, 95, 118, 111, 108, 117, 109, 101] in field being converted to int, caught NumberFormatException <For input string: "stock_volume"> field discarded
2015-06-10 11:52:23,727 [main] WARN org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigHadoopLogger - org.apache.pig.builtin.Utf8StorageConverter(FIELD_DISCARDED_TYPE_CONVERSION_FAILED): Unable to interpret value [115, 116, 111, 99, 107, 95, 112, 114, 105, 99, 101, 95, 97, 100, 106, 95, 99, 108, 111, 115, 101] in field being converted to float, caught NumberFormatException <For input string: "stock_price_adj_close"> field discarded
java.lang.ClassCastException
2015-06-10 11:52:23,727 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 2997: Encountered IOException. Exception
At the end of your log, you have the following error:
Unable to interpret value [115, 116, 111, 99, 107, 95, 112, 114, 105, 99, 101, 95, 97, 100, 106, 95, 99, 108, 111, 115, 101] in field being converted to float, caught NumberFormatException field discarded java.lang.ClassCastException 2015-06-10 11:52:23,727 [main]
Could you provide a sample of your csv file? I think even STOCK_A is not okay.
You may also LIMIT the input to a few lines and show the results of DESCRIBE and DUMP on those lines.

Issues with bitbake for building Angstrom

I'm trying to build an Angstrom image from scratch using bitbake (since Angstrom is now Yocto compatible), but I run into an error the moment I run bitbake systemd-image:
Traceback (most recent call last):
File "/usr/bin/bitbake", line 234, in <module>
ret = main()
File "/usr/bin/bitbake", line 197, in main
server = ProcessServer(server_channel, event_queue, configuration)
File "/usr/lib/pymodules/python2.7/bb/server/process.py", line 78, in __init__
self.cooker = BBCooker(configuration, self.register_idle_function)
File "/usr/lib/pymodules/python2.7/bb/cooker.py", line 76, in __init__
self.parseConfigurationFiles(self.configuration.file)
File "/usr/lib/pymodules/python2.7/bb/cooker.py", line 510, in parseConfigurationFiles
data = _parse(os.path.join("conf", "bitbake.conf"), data)
TypeError: getVar() takes exactly 3 arguments (2 given)
ERROR: Error evaluating '${TARGET_OS}:${TRANSLATED_TARGET_ARCH}:build-${BUILD_OS}:pn-${PN}:${MACHINEOVERRIDES}:${DISTROOVERRIDES}:${CLASSOVERRIDE}:forcevariable${@bb.utils.contains("TUNE_FEATURES", "thumb", ":thumb", "", d)}${@bb.utils.contains("TUNE_FEATURES", "no-thumb-interwork", ":thumb-interwork", "", d)}'
Traceback (most recent call last):
File "/usr/lib/pymodules/python2.7/bb/data_smart.py", line 116, in expandWithRefs
s = __expand_var_regexp__.sub(varparse.var_sub, s)
File "/usr/lib/pymodules/python2.7/bb/data_smart.py", line 60, in var_sub
var = self.d.getVar(key, 1)
File "/usr/lib/pymodules/python2.7/bb/data_smart.py", line 260, in getVar
return self.expand(value, var)
File "/usr/lib/pymodules/python2.7/bb/data_smart.py", line 132, in expand
return self.expandWithRefs(s, varname).value
File "/usr/lib/pymodules/python2.7/bb/data_smart.py", line 117, in expandWithRefs
s = __expand_python_regexp__.sub(varparse.python_sub, s)
TypeError: getVar() takes exactly 3 arguments (2 given)
ERROR: Error evaluating '${@bb.parse.BBHandler.vars_from_file(d.getVar('FILE'),d)[0] or 'defaultpkgname'}'
Traceback (most recent call last):
File "/usr/lib/pymodules/python2.7/bb/data_smart.py", line 117, in expandWithRefs
s = __expand_python_regexp__.sub(varparse.python_sub, s)
File "/usr/lib/pymodules/python2.7/bb/data_smart.py", line 76, in python_sub
value = utils.better_eval(codeobj, DataContext(self.d))
File "/usr/lib/pymodules/python2.7/bb/utils.py", line 387, in better_eval
return eval(source, _context, locals)
File "PN", line 1, in <module>
TypeError: getVar() takes exactly 3 arguments (2 given)
I've been at this for a while now, searching on different sites. Originally I tried following the guide in the developer section of the Angstrom site, but once I ran into some errors (prior to the one I'm posting here), I found Derek Molloy's site http://derekmolloy.ie/building-angstrom-for-beaglebone-from-source/ which solved those errors and gave a little more detail about the process.
Eventually I stumbled onto another forum post which described my problem, but unfortunately the answers weren't really clear to me: http://comments.gmane.org/gmane.linux.distributions.angstrom.devel/7431. I'm at a loss as to what could be wrong, and I'm pretty much new to the Yocto Project, so I'm unsure whether there are steps missing or something implicit that I have overlooked; I would deeply appreciate anyone who could point me in the right direction.
As a side note, I've been thinking it could have something to do with the environment-angstrom-... file I have, since mine is environment-angstrom-v2013.12 and all the other examples use earlier versions; I'm wondering if there's a new step involved when working with this one.
Is there a reason why you are using a system-wide bitbake instead of the one that is compatible with that release of Angstrom?
Don't use a system-wide bitbake, as the bitbake API can and does change over time. Use the corresponding bitbake for that release of angstrom.
(This is breaking because your bitbake requires getVar to take three arguments, but your Angstrom layers are only passing two.)
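To make the version mismatch concrete, here is an illustrative Python sketch (not actual bitbake source) of what happens when newer metadata calls the older API:
class OldDataSmart:
    """Mimics the older bitbake API, where the 'expand' argument is mandatory."""
    def getVar(self, var, expand):
        return "value"

d = OldDataSmart()
try:
    d.getVar("FILE")    # newer metadata omits 'expand', as in the failing layers
except TypeError as err:
    print(err)          # Python 2 words this as: getVar() takes exactly 3 arguments (2 given)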

Elasticsearch Parse Exception error when attempting to index PDF

I'm just getting started with elasticsearch. Our requirement has us needing to index thousands of PDF files and I'm having a hard time getting just ONE of them to index successfully.
I installed the Attachment Type plugin and got the response: Installed mapper-attachments.
I followed the Attachment Type in Action tutorial, but the process hangs and I don't know how to interpret the error message. I also tried the gist, which hangs in the same place.
$ curl -X POST "localhost:9200/test/attachment/" -d json.file
{"error":"ElasticSearchParseException[Failed to derive xcontent from (offset=0, length=9): [106, 115, 111, 110, 46, 102, 105, 108, 101]]","status":400}
More details:
The json.file contains an embedded Base64 PDF file (as per instructions). The first line of the file appears correct (to me anyway): {"file":"JVBERi0xLjQNJeLjz9MNCjE1OCAwIG9iaiA8...
I'm not sure whether json.file is invalid or whether elasticsearch just isn't set up to parse PDFs properly.
Encoding - Here's how we're encoding the PDF into json.file (as per tutorial):
coded=`cat fn6742.pdf | perl -MMIME::Base64 -ne 'print encode_base64($_)'`
json="{\"file\":\"${coded}\"}"
echo "$json" > json.file
also tried:
coded=`openssl base64 -in fn6742.pdf`
log:
[2012-06-07 12:32:16,742][DEBUG][action.index ] [Bailey, Paul] [test][0], node[AHLHFKBWSsuPnTIRVhNcuw], [P], s[STARTED]: Failed to execute [index {[test][attachment][DauMB-vtTIaYGyKD4P8Y_w], source[json.file]}]
org.elasticsearch.ElasticSearchParseException: Failed to derive xcontent from (offset=0, length=9): [106, 115, 111, 110, 46, 102, 105, 108, 101]
at org.elasticsearch.common.xcontent.XContentFactory.xContent(XContentFactory.java:147)
at org.elasticsearch.common.xcontent.XContentHelper.createParser(XContentHelper.java:50)
at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:451)
at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:437)
at org.elasticsearch.index.shard.service.InternalIndexShard.prepareCreate(InternalIndexShard.java:290)
at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:210)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:532)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:430)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
I'm hoping someone can help me see what I'm missing or did wrong.
The following error points to the source of the problem.
Failed to derive xcontent from (offset=0, length=9): [106, 115, 111, 110, 46, 102, 105, 108, 101]
The UTF-8 codes [106, 115, 111, ...] show that you are trying to index the string "json.file" itself instead of the content of the file.
To index the content of the file, simply add "@" in front of the file name so that curl sends the file's contents:
curl -X POST "localhost:9200/test/attachment/" -d @json.file
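If you prefer to build the request yourself instead of relying on curl's @file syntax, here is a minimal Python sketch of the same index call (assuming Elasticsearch with the mapper-attachments plugin on localhost:9200, and the fn6742.pdf file from the question):
import base64
import json
import urllib.request

# Base64-encode the PDF in one go, so no stray newlines end up in the JSON value.
with open("fn6742.pdf", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("ascii")

body = json.dumps({"file": encoded}).encode("utf-8")
req = urllib.request.Request(
    "http://localhost:9200/test/attachment/",
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))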
Turns out it's necessary to export ES_JAVA_OPTS=-Djava.awt.headless=true before running a java app on a 'headless' server... who would'a thought!?!

VARCHAR requires a length when rendered on MySQL

I have a buildout instance with pas.plugins.sqlalchemy. It appears in the installation list, but installing it results in an error.
Here's the ZCML definition:
<configure xmlns="http://namespaces.zope.org/zope" xmlns:db="http://namespaces.zope.org/db">
<include package="z3c.saconfig" file="meta.zcml" />
<db:engine xmlns="http://namespaces.zope.org/db" name="pas" url="mysql://webadmin:password@rcs-mysql-dev/estep" />
<db:session xmlns="http://namespaces.zope.org/db" name="pas.plugins.sqlalchemy" engine="pas" />
</configure>
Traceback is:
Traceback (innermost last):
Module ZPublisher.Publish, line 127, in publish
Module ZPublisher.mapply, line 77, in mapply
Module ZPublisher.Publish, line 47, in call_object
Module Products.CMFQuickInstallerTool.QuickInstallerTool, line 575, in installProducts
Module Products.CMFQuickInstallerTool.QuickInstallerTool, line 512, in installProduct
- __traceback_info__: ('pas.plugins.sqlalchemy',)
Module Products.GenericSetup.tool, line 330, in runAllImportStepsFromProfile
- __traceback_info__: profile-pas.plugins.sqlalchemy:install
Module Products.GenericSetup.tool, line 1085, in _runImportStepsFromContext
Module Products.GenericSetup.tool, line 999, in _doRunImportStep
- __traceback_info__: pas.plugins.sqlalchemy.install
Module pas.plugins.sqlalchemy.setuphandlers, line 46, in install
Module sqlalchemy.schema, line 2148, in create_all
Module sqlalchemy.engine.base, line 1698, in create
Module sqlalchemy.engine.base, line 1740, in _run_visitor
Module sqlalchemy.sql.visitors, line 83, in traverse_single
Module sqlalchemy.engine.ddl, line 42, in visit_metadata
Module sqlalchemy.sql.visitors, line 83, in traverse_single
Module sqlalchemy.engine.ddl, line 58, in visit_table
Module sqlalchemy.engine.base, line 1191, in execute
Module sqlalchemy.engine.base, line 1241, in _execute_ddl
Module sqlalchemy.sql.expression, line 1413, in compile
Module sqlalchemy.engine.base, line 702, in compile
Module sqlalchemy.engine.base, line 715, in process
Module sqlalchemy.sql.visitors, line 54, in _compiler_dispatch
Module sqlalchemy.sql.compiler, line 1152, in visit_create_table
Module sqlalchemy.dialects.mysql.base, line 1282, in get_column_specification
Module sqlalchemy.engine.base, line 761, in process
Module sqlalchemy.sql.visitors, line 54, in _compiler_dispatch
Module sqlalchemy.sql.compiler, line 1450, in visit_string
Module sqlalchemy.dialects.mysql.base, line 1520, in visit_VARCHAR
InvalidRequestError: VARCHAR requires a length when rendered on MySQL
You need to specify lengths for all Strings, i.e. String(n) instead of simply String. So, in pas.plugins.sqlalchemy.model.User,
login = Column(String, unique=True)
becomes
login = Column(String(100), unique=True)
etc...
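To see what the MySQL dialect objects to without a full Plone/MySQL setup, here is a minimal SQLAlchemy sketch (the table and column names are invented for illustration): the unbounded String fails to compile for MySQL, while String(100) renders as VARCHAR(100).
from sqlalchemy import Column, Integer, MetaData, String, Table
from sqlalchemy.dialects import mysql
from sqlalchemy.exc import SQLAlchemyError
from sqlalchemy.schema import CreateTable

metadata = MetaData()

# Unbounded String: fine on PostgreSQL, rejected by the MySQL dialect.
users_unbounded = Table("users_unbounded", metadata,
    Column("id", Integer, primary_key=True),
    Column("login", String, unique=True))

# Bounded String: renders as VARCHAR(100) on MySQL.
users_bounded = Table("users_bounded", metadata,
    Column("id", Integer, primary_key=True),
    Column("login", String(100), unique=True))

try:
    print(CreateTable(users_unbounded).compile(dialect=mysql.dialect()))
except SQLAlchemyError as err:
    # older SQLAlchemy raises InvalidRequestError here, newer CompileError; both derive from SQLAlchemyError
    print("unbounded:", err)

print(CreateTable(users_bounded).compile(dialect=mysql.dialect()))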
I don't have a MySQL installation, and pas.plugins.sqlalchemy works fine on PostgreSQL for me, but it would seem that the authors have made an assumption about varchars. Assuming it's not something that SQLAlchemy should be handling itself (it would be really nice if the MySQL dialect for SQLAlchemy would select an appropriate maximum size for unbounded varchars), I'll see if I can commit a fix this evening.
A quick glance at the code shows that all "String" (treated as varchar by the database) fields have maximum lengths except Login, name and password in the User table and name in the Group table, and there's no good reason why these should be different.
Update: Check out https://svn.plone.org/svn/collective/PASPlugins/pas.plugins.sqlalchemy/branches/auspex from subversion. It's my version of pas.plugins.sqlalchemy with support for the IGroupCapability interface (lets users be added to and removed from groups that are also stored in the rdb), and I've also added lengths to all unbounded String fields.
If you don't know how to use subversion checkouts in buildout, see: http://pypi.python.org/pypi/mr.developer/