command(A) :-
    open('tree.txt', write, Stream),
    (   s(X, A, []),
        write(Stream, X),
        fail
    ;   true
    ),
    close(Stream).
In the above code I am attempting to write the resulting syntax tree of a sentence to a file called tree.txt. s(X,A,[]) represents the sentence I am parsing, where X is the syntax tree. I call this predicate while running the grammar in SWI-Prolog and it returns true, but the syntax tree is not written to the text file (which is in the same directory as my grammar file).
The whole grammar file is as follows (if necessary):
command(A) :-
    open('tree.txt', write, Stream),
    ( s(X,A,[]), write(Stream,X), fail ; true ),
    close(Stream).
% writer(X) :- open('tree.txt', write,Stream),
% write(Stream,X),
% close(Stream).
s(s(X)) --> cmd(X).
s(s(X)) --> assertion(X).
s(s(X)) --> qstn(X).
qstn(qstn(qw,subject,X)) --> qw, subject, np(X).
qstn(qstn(qw,is,X)) --> qw, [is], np(X).
assertion(assertion(subject,is,X)) --> subject, [is], np(X).
cmd(cmd(X,Y)) --> v(X), np(Y).
np(np(det,X)) --> det, n(X).
np(np(X)) --> n(X).
np(np(adj,X)) --> adj, n(X).
n(n(A)) --> [A], {lex(A,n)}.
v(v(A)) --> [A], {lex(A,v)}.
qw --> [A], {lex(A,qw)}.
det --> [A], {lex(A,det)}.
adj --> [A], {lex(A,adj)}.
subject --> [A], {lex(A,subject)}.
lex(this,subject).
lex(that,subject).
lex(build, v).
lex(move, v).
lex(walk, v).
lex(is,v).
lex(block, n).
lex(structure,n).
lex(where, qw).
lex(is, qw).
lex(what, qw).
lex(the, det).
lex(a, det).
lex(at, conj).
lex(on, conj).
where the writer/1 predicate was an earlier attempt at writing to the file.
I can provide more information if necessary, and I've tried what was suggested in another similar question (didn't work).
Can anybody help me?
How do I re-use a piece of logic within a transformation?
Right now the transformation looks like this:
Read file 1 --> Step 1 --> Step 2 --> ... --> Step n --> Write file 1
Read file 2 --> Step 1 --> Step 2 --> ... --> Step n --> Write file 2
...
Read file 50 --> Step 1 --> Step 2 --> ... --> Step n --> Write file 50
The transformation logic (Step 1 to Step n) is the same for all input files.
What I want to have is:
Read file 1 --> Call Transformation logic --> Write file 1
Read file 2 --> Call Transformation logic --> Write file 2
...
Read file 50 --> Call Transformation logic --> Write file 50
Transformation logic:= Step 1 --> Step 2 --> ... --> Step n
I know there is the Mapping (sub-transformation) step.
However I would end up with two .ktr files: one for the parent transformation (containing file input, sub-transformation call and file output), and one for the sub-transformation (containing the steps 1 to n).
I do not want to split the transformation into two files only for the purpose of re-using the Steps 1 to n. So how do I use the functionality of the Mapping (sub-transformation) step without ending up with two .ktr files?
Thanks in advance.
In SSIS there is a loop container, in which every step inside the container executes until the loop ends.
PDI has no such loop container. You will have to use two transformations to fulfil your requirement, or, if it is possible in your case, use a JavaScript step to write the condition yourself.
Going at the root of Agda standard library, and issuing the following command:
grep -r "module _" . | wc -l
Yields the following result:
843
Whenever I encounter such anonymous modules (I assume that's what they are called), I cannot quite figure out what their purpose is, despite their apparent ubiquity, nor how to use them: by definition, I can't access their content using their name. I assume this should be possible somehow, otherwise there would be no point in even allowing them to be defined.
The documentation page:
https://agda.readthedocs.io/en/v2.6.1/language/module-system.html#anonymous-modules
has a section called "anonymous modules" which is in fact empty.
Could somebody explain what the purpose of anonymous modules is ?
If possible, any example to emphasize the relevance of the definition of such modules, as well as how to use their content would be very much appreciated.
Here are the possible ideas I've come up with, but none of them seems completely satisfying:
They are a way to group thematically related definitions inside an Agda file.
Their name is somehow inferred by Agda when using the functions they provide.
Their content is only meant to be visible / used inside their enclosing module (a bit like a private block).
Anonymous modules can be used to simplify a group of definitions which share some arguments. Example:
open import Data.Empty
open import Data.Nat
<⇒¬≥ : ∀ {n m} → n < m → n ≥ m → ⊥
<⇒¬≥ = {!!}
<⇒> : ∀ {n m} → n < m → m > n
<⇒> = {!!}
module _ {n m} (p : n < m) where
  <⇒¬≥′ : n ≥ m → ⊥
  <⇒¬≥′ = {!!}

  <⇒>′ : m > n
  <⇒>′ = {!!}
AFAIK this is the only use of anonymous modules. When the module _ scope is closed, you can no longer refer to the module itself, but you can refer to its definitions as if they had not been defined in a module at all (but with the module parameters as extra arguments instead).
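For instance, continuing the example above (a sketch; the name contradiction is hypothetical), the parameter p is prepended to the type of each definition once the module scope is closed:

```agda
-- Outside the anonymous module, <⇒¬≥′ is visible unqualified with type
-- ∀ {n m} → n < m → n ≥ m → ⊥  (the module parameter p became an argument):
contradiction : ∀ {n m} → n < m → n ≥ m → ⊥
contradiction p q = <⇒¬≥′ p q
```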
I would like to write a module that exports a predicate where the user should be able to access a predicate p/1 as a prefix operator. I have defined the following module:
:- module(lala, [p/1]).
:- op(500, fy, [p]).
p(comment).
p(ca).
p(va).
and load it now via:
?- use_module(lala).
true.
Unfortunately, a query fails:
?- p X.
ERROR: Syntax error: Operator expected
ERROR: p
ERROR: ** here **
ERROR: X .
After setting the operator precedence properly, everything works:
?- op(500, fy, [p]).
true.
?- p X.
X = comment ;
X = ca ;
X = va.
I used SWI-Prolog for my output, but the same problem occurs in YAP as well (GNU Prolog does not support modules). Is there a way to spare the user from having to set the operator precedence themselves?
You can export the operator with the module/2 directive.
For example:
:- module(lala, [p/1,
op(500, fy, p)]).
Since the operator is then also available in the module, you can write for example:
p comment.
p ça.
p va.
where p is used as a prefix operator.
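With the operator in the export list (assuming the module is saved as lala.pl), simply loading it is enough for the prefix syntax to parse in the user's session:

```prolog
?- use_module(lala).
true.

?- p X.
X = comment .
```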
I'm trying to create a script in (g)AWK in which I'd like to put the following EXACT lines at the beginning of the output text file:
<?xml version="1.0" encoding="UTF-8"?>
<notes version="1">
<labels>
<label id="0" color="30DBFF">Custom Label 1</label>
<label id="1" color="30FF97">Custom Label 2</label>
<label id="2" color="E1FF80">Custom Label 3</label>
<label id="3" color="FF9B30">Custom Label 4</label>
<label id="4" color="FF304E">Custom Label 5</label>
<label id="5" color="FF30D7">Custom Label 6</label>
<label id="6" color="303EFF">Custom Label 7</label>
<label id="7" color="1985FF">Custom Label 8</label>
</labels>
and this one to the end:
</notes>
Here is my script so far:
BEGIN {printf("<?xml version="1.0" encoding="UTF-8"?>\n") > "notes.sasi89.xml"}
END {printf("</notes>") > "notes.sasi89.xml"}
My problem is that it's not printing the way I'd like, it gives me this in the output file:
<?xml version=1 encoding=-8?>
</notes>
Some characters and quotes are missing. I've tried studying manuals, but they sound too complicated to me. I would appreciate it if someone would give me a hand or point me in the right direction.
Answer is Community Wiki to give what credit can be given where credit is due.
Primary problem and solution
As swstephe noted in a comment:
You need to escape your quotes:
printf("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n")
Anti-patterns
I regard your outline script as an anti-pattern (actually, two anti-patterns). You have:
BEGIN {printf("<?xml version="1.0" encoding="UTF-8"?>\n") > "notes.sasi89.xml"}
END {printf("</notes>") > "notes.sasi89.xml"}
The anti-patterns are:
You repeat the file name; you shouldn't. You would do better to use:
BEGIN {file = "notes.sasi89.xml"
printf("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n") > file}
END {printf("</notes>") > file}
You shouldn't be doing the I/O redirection in the awk script in the first place. You should let the shell do the I/O redirection.
awk '
BEGIN {printf("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n")}
END {printf("</notes>")}
' > notes.sasi89.xml
There are times when I/O redirection in the script is appropriate, but that's when you need output to multiple files. When, as appears very probable here, you have just one output file, make the script write to standard output and let the shell do the I/O redirection. It is much more flexible: you can rename the file more easily, send the output to other programs via a pipe, etc., all of which is very much harder if the output file name is embedded in the awk script.
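Putting the two fixes together, a sketch of the whole script under this pattern (the color list is copied from the desired output above; the rules that process the real input would go between the BEGIN and END blocks):

```shell
awk 'BEGIN {
    # Fixed prologue: the inner quotes must be escaped inside awk strings.
    print "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
    print "<notes version=\"1\">"
    print "<labels>"
    # One <label> per color; ids count from 0, label numbers from 1.
    n = split("30DBFF 30FF97 E1FF80 FF9B30 FF304E FF30D7 303EFF 1985FF", c, " ")
    for (i = 1; i <= n; i++)
        printf("<label id=\"%d\" color=\"%s\">Custom Label %d</label>\n", i - 1, c[i], i)
    print "</labels>"
}
END { print "</notes>" }
' /dev/null > notes.sasi89.xml
```

Note that the redirection to notes.sasi89.xml happens once, in the shell, rather than on every print inside the script.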
I have to load several tables into SQL Server 2012 from SQL Server 2000. I heard BIDS could do this; I'm pretty new to it and wanted some help. I would really appreciate whatever help I get with it.
I have installed BIDS Helper already and used the code below, but it gives me errors stating:
Error 1187 Illegal syntax. Expecting valid start name character.
Error 1188 Character '#', hexadecimal value 0x23 is illegal in an XML name.
Error 1189 The character '@', hexadecimal value 0x40 is illegal at the beginning of an XML name.
<#@ template language="C#" hostspecific="true" #>
<#@ import namespace="System.Data" #>
<#@ import namespace="System.Data.SqlClient" #>
<#@ import namespace="System.IO" #>
<Biml xmlns="http://schemas.varigence.com/biml.xsd">
<!--
<#
string connectionStringSource = @"Server=xxxxx;Initial Catalog=xxxx;Integrated Security=SSPI;Provider=sqloledb";
string connectionStringDestination = @"Server=xxxxxx;Initial Catalog=xxxxxxx;Integrated Security=SSPI;Provider=SQLNCLI11.1";
string SrcTableQuery = @"
SELECT
SCHEMA_NAME(t.schema_id) AS schemaName
, T.name AS tableName
FROM
sys.tables AS T
WHERE
T.is_ms_shipped = 0
AND T.name <> 'sysdiagrams';
";
DataTable dt = null;
dt = ExternalDataAccess.GetDataTable(connectionStringSource, SrcTableQuery);
#>
-->
<Connections>
<OleDbConnection
Name="SRC"
CreateInProject="false"
ConnectionString="<#=connectionStringSource#>"
RetainSameConnection="false">
</OleDbConnection>
<OleDbConnection
Name="DST"
CreateInProject="false"
ConnectionString="<#=connectionStringDestination#>"
RetainSameConnection="false">
</OleDbConnection>
</Connections>
<Packages>
<# foreach (DataRow dr in dt.Rows) { #>
<Package ConstraintMode="Linear"
Name="<#=dr[1].ToString()#>"
>
<Variables>
<Variable Name="SchemaName" DataType="String"><#=dr[0].ToString()#></Variable>
<Variable Name="TableName" DataType="String"><#=dr[1].ToString()#></Variable>
<Variable Name="QualifiedTableSchema"
DataType="String"
EvaluateAsExpression="true">"[" + @[User::SchemaName] + "].[" + @[User::TableName] + "]"</Variable>
</Variables>
<Tasks>
<Dataflow
Name="DFT"
>
<Transformations>
<OleDbSource
Name="OLE_SRC <#=dr[0].ToString()#>_<#=dr[1].ToString()#>"
ConnectionName="SRC"
>
<TableFromVariableInput VariableName="User.QualifiedTableSchema"/>
</OleDbSource>
<OleDbDestination
Name="OLE_DST <#=dr[0].ToString()#>_<#=dr[1].ToString()#>"
ConnectionName="DST"
KeepIdentity="true"
TableLock="true"
UseFastLoadIfAvailable="true"
KeepNulls="true"
>
<TableFromVariableOutput VariableName="User.QualifiedTableSchema" />
</OleDbDestination>
</Transformations>
</Dataflow>
</Tasks>
</Package>
<# } #>
</Packages>
</Biml>
This is the maddening thing about trying to do much BimlScript in Visual Studio. The editor "knows" it's editing XML markup, so all of the enhancements that make up BimlScript look "wrong" to it: it highlights them, puts angry red squigglies on them, and makes you question whether you really have valid code here.
In my Error List, I see the same things you're seeing but this is one of the few times you can ignore Visual Studio's built in error checker.
Instead, the true test of whether the code is good is to right-click on the .biml file(s) and select "Check Biml for errors".
You should get a dialog confirming the Biml compiled without errors.
If so, click Generate SSIS Packages and then get some tape to attach your mind back into your head as it's just been blown ;)
Operational note
Note that the supplied code is going to copy all of the data from the source to the target. But, you've also specified that this is going to be a monthly operation so you'd either want to add a truncate step via an Execute SQL Task, or factor in a Lookup Transformation (or two) to determine new versus existing data and change detection.
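For the truncate option, a sketch of what could be added inside <Tasks> ahead of the Dataflow (this assumes Biml's ExecuteSQL element with a DirectInput statement; untested, so adjust to your project):

```xml
<!-- Hypothetical truncate step so each monthly run starts from an empty target. -->
<ExecuteSQL Name="SQL Truncate <#=dr[0].ToString()#>_<#=dr[1].ToString()#>"
            ConnectionName="DST">
    <DirectInput>TRUNCATE TABLE [<#=dr[0].ToString()#>].[<#=dr[1].ToString()#>];</DirectInput>
</ExecuteSQL>
```

Because the package uses ConstraintMode="Linear", tasks are chained in document order, so a task placed before the Dataflow runs first.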