Execute ColdFusion code stored in a string dynamically?

I have an email body stored as a string in a database, something like this:
This is an email body containing lots of different variables. Dear #name#, <br/> Please contact #representativeName# for further details.
I pull this field from the database using a stored proc, and then I want to evaluate it on the ColdFusion side, so that instead of "#name#", it will insert the value of the name variable.
I've tried using evaluate, but that only seems to work if there's just a variable name. It throws an error because of the other text.
(I can't just use placeholders and a find/replace like this - Resolving variables inside a Coldfusion string, because the whole point of storing this in a database is that the variables used to build the string are dynamic. For example, in one case the name field can be called "name" and in another it could be "firstName", etc.)

I would loop over each #variableName# reference and replace it with its evaluated value. A regex can find them all, then a loop can evaluate and substitute them one by one; see the sketch below.
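A rough CFML sketch of that approach (the emailBody variable name and the regex are my own assumptions, not part of the original answer):
<!--- sketch only: emailBody holds the raw text pulled from the database --->
<cfset result = emailBody>
<!--- find every #variableName# token; ## escapes a literal # inside a CFML string --->
<cfset tokens = reMatch("##[a-zA-Z_][a-zA-Z0-9_.]*##", result)>
<cfloop array="#tokens#" index="token">
    <!--- strip the surrounding # signs to get the bare variable name --->
    <cfset varName = mid(token, 2, len(token) - 2)>
    <cfif isDefined(varName)>
        <cfset result = replace(result, token, evaluate(varName), "all")>
    </cfif>
</cfloop>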

You need to write it to a file and CFINCLUDE it. This will incur a compilation overhead, but that's unavoidable.
Can you not save the code to the file system and just store a reference to where it is in the DB? That way it'll only get recompiled when it changes, rather than every time you come to use it?
<!--- pseudo code --->
<cfquery name="q">
SELECT fileContent // [etc]
</cfquery>
<cfset fileWrite(expandPath("/path/to/file/to/write/code.cfm"), q.fileContent)>
<cfinclude template="/path/to/file/to/write/code.cfm">
<cfset fileDelete(expandPath("/path/to/file/to/write/code.cfm"))>
That's the basic idea: get the code, write the code, include the code, delete the code. Although you'll want to make sure the file that gets created doesn't collide with any other file (use a UUID as the file name, or something, as per someone else's suggestion).
You're also gonna want to load test this. I doubt it'll perform very well. Others have suggested using the virtual file system, but I am not so sure there'll be much of a gain there: it's the compilation process that takes the time, not the actual file ops. But it's worth investigating.

Are you using ColdFusion 9 or Railo? If yes, writing to and including in-memory files may be a quick and simple solution. Just generate the file name with something like CreateUUID() to avoid collisions.
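A minimal sketch of that in-memory variant, reusing the q query from the pseudocode above (the /inmem mapping name is an assumption; cfinclude can only reach ram:// through a mapping defined in Application.cfc):
<!--- in Application.cfc: this.mappings["/inmem"] = "ram://" --->
<cfset fileName = createUUID() & ".cfm">
<cfset fileWrite("ram://" & fileName, q.fileContent)>
<cfinclude template="/inmem/#fileName#">
<cfset fileDelete("ram://" & fileName)>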

So basically, after researching and reading the answers, it seems that these are my options:
Have separate fields in the table for each variable and evaluate them individually. e.g. nameVariable, reprNameVariable, and then I can build the body with code like this:
This is an email body containing lots of different variables. Dear #evaluate(nameVariable)#, <br/> Please contact #evaluate(reprNameVariable)# for further details.
Have an "emailBody" field with all the text and applicable field names, and write it to a temporary file and cfinclude it. (As suggested by Adam Cameron, and I think that's what Sergii's getting at)
Have an "emailBody" field and write code to loop over it, find all coldfusion variables, and replace them with their "evaluate"d version. (As suggested by Dale Fraser)
Have small template files, one for each report type, with the email body, and have a field "emailBodyTemplate" that indicates which template to include. (As suggested by Adam Cameron)
Now I just have to decide which to use :) When I do, I'll accept the answer of the person who suggested that method (unless it's one that wasn't suggested, in which case I'll probably accept this, or if someone comes up with another method that makes more sense)

It's been a while since you posted this - but it's EXACTLY what I do. I found your question while looking for something else.
I've simply created my own simple syntax for variables when I write my emails into the database:
Hello ~FirstName~ ~LastName~,
Then, in my sending cfm file, I pull the email text from the database, and save it to a variable:
<cfset EmailBody = mydatabasequery.HTMLBody>
Then I quickly strip my own syntax with my variables (from another query called RecipientList):
<cfset EmailBody = ReplaceNoCase(EmailBody, "~FirstName~", "#RecipientList.First#", "ALL")>
<cfset EmailBody = ReplaceNoCase(EmailBody, "~LastName~", "#RecipientList.Last#", "ALL")>
Then I simply send my email:
<cfmail ....>#EmailBody#</cfmail>
I hope you manage to see this. If you control the authoring of the email messages, which I suspect you do, this should work well.
Russell

Related

ListObjects.Add.QueryTable Source Array String

I will provide some context before I ask my question.
I am attempting to query an SQL Server and create a table within Excel from the data. Because I am not familiar with how to accomplish this in VBA, I recorded a macro using Data -> Get External Data -> From Other Sources -> Microsoft Query. In the dialog box that appears, I chose a .DSN file provided to me by someone else. I then used the Microsoft Query interface to structure the query and import the data onto a worksheet.
The code in the recorded macro looked something like this. I will use generic terms instead of the actual code.
With Sheet2.ListObjects.Add(SourceType:=0, Source:=Array( _
        Array("ODBC;DRIVER=SQL Server;SERVER=ServerName;UID=userid;Trusted_Connection=Yes;APP=Microsoft Windows Operating System;WSID=SomeString"), _
        Array("A;DATABASE=DatabaseName")), Destination:=Range("Sheet2!$A$1")).QueryTable
I know this is not formatted ideally, which is part of my question below.
https://msdn.microsoft.com/en-us/library/bb211863(v=office.12).aspx
From the above article, I know that SourceType:= 0 is an xlSrcExternal, or an external data source. This makes sense to me.
My confusion begins to arise when I get to the Source component of the Add method. From the provided article, "When SourceType = xlSrcExternal, an array of String values specifying a connection to the source, containing the following elements:
•0 - URL to SharePoint site
•1 - ListName
•2 - ViewGUID
So to begin with, what exactly is meant by "an array of String values"? The code from the recorded macro does not appear to correspond to what I thought was an array. I know that normally an array is declared something like this: Array("string1", "string2", etc.). Or is the recorded array simply an array of one value? In other words, Array("string1"). Does anyone know the purpose of passing an "array of string values" as opposed to just passing a string?
Also, does anyone know the nuances of why the recorded macro has this particular formatting/syntax? In other words, why does it appear to have the syntax Array(Array("string1"), _ (new line) Array("string2"))? Why not just Array("string1")? Does it have something to do with the second line being too long?
I have several more questions related to this topic, but this seemed like a good place to start.
Thank you all for any help given.

Extracting information from a file variable in d3 pick basic

I have a file variable in d3 pick basic and I am trying to figure out what file it corresponds to.
I tried the obvious thing which was to say:
print f *suppose the file variable's name is f in this case
but that didn't work, because:
SELECTION: 58[B34] in program "FILEPRINTER", Line 7: File variable used
where string expression expected.
I also tried things like:
list f *didn't compile
execute list dict f *same error
execute list f *same error
but those also did not work.
In case any one is wondering, the reason I am trying to do this in the first place is that there is a global variable that is passed up and down in the code base I am working with, but I can't find where the global variable gets its value from.
That file pointer variable is called a "file descriptor". You can't get any information from it.
You can use the file-of-files to log Write events, and after a Write is performed by the code, check to see what file was updated. The details for doing this would be a bit cumbersome. You really should rely on your Value-Added Reseller or contract with competent assistance for this.
If this is not a live end-user system, you can also modify an item getting written with some very unique text like "WHAT!FILE!IS!THIS?". Then you can do a Search-System command to search the entire account (or system) to find that text. See docs for proper use of that command.
This is probably the best option... Inject the following:
IF #USER = "CRISZ" THEN ; * substitute your user ID
READU FOO FROM F,"BLAH" ELSE
DEBUG
RELEASE F,"BLAH"
END
END
That code will stop only for one person - for everyone else it will flow as normal. When it does stop, use the LIST-LOCKS command to see which file has a read lock for item "BLAH". That's your file! Don't forget to remove and recompile the code. Note that recompiling code while users are actively using it results in aborts. It's best to do this kind of thing after hours or on a test system.
If you can't modify the code like that, diagnostics like this can be difficult. If the above suggestions don't help, I think this challenge might be beyond your personal level of experience yet and recommend you get some help.
If a suggestion here does help, please flag this as the answer. :)

How to send a dynamic message email

I'm writing a program that will scan a barcode from a serial scanner and store the information in a MySQL database.
In the program I would like to check for when the scanner is not working. If it's not working, then send me an email. I set a variable called MessageBody which sets the body of the email that is going to be sent. The subroutine is called EmailError(), so I can call it from other subroutines. I would like to take a variable or something and put it in the message body so I can have a dynamic message body and not a static, boring one.
Any Ideas?
I'm sure I wasn't clear enough so please ask questions.
I found a link that might be of use to you.
http://www.codeproject.com/KB/IP/mailtemplates.aspx
You could create a simpler version: maybe have a good file and a bad file, and when putting together the body of the email in the EmailError method you could simply read the file in and replace certain keywords with variable values. In the past I've used something that String.Format can understand, so the file would have something like
Dear {0},
Thank you for submitting your message. You are customer number {1}.
If you have any questions please contact our helpdesk at: 555-555-5555 or email them at helpdesk@somedomain.com
Then open the file for reading, store its contents in a variable, and build the email body with string value = String.Format(myFileContents, firstName, customerNumber);
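A small C# sketch of that read-and-format step (the file path and sample values are made up for illustration):
// sketch: load the template and fill in the {0}/{1} placeholders
string myFileContents = System.IO.File.ReadAllText(@"C:\templates\error-email.txt"); // hypothetical path
string firstName = "Jane";        // in practice these come from your own data
int customerNumber = 12345;
string messageBody = String.Format(myFileContents, firstName, customerNumber);
// messageBody can now be used as the dynamic body when EmailError() sends the mail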
Hopefully this helps a bit. There are many other techniques too: XSLT for transforming, jXLS (Velocity and Excel templates in Java), or the NVelocity version for .NET: http://nvelocity.sourceforge.net/

TSearch2 - dots explosion

The following conversion
SELECT to_tsvector('english', 'Google.com');
returns this:
'google.com':1
Why doesn't the TSearch2 engine return something like this?
'google':2, 'com':1
Or how can I make the engine return the exploded string as I wrote above?
I just need "Google.com" to be findable by "google".
Unfortunately, there is no quick and easy solution.
Denis is correct in that the parser is recognizing it as a hostname, which is why it doesn't break it up.
There are 3 other things you can do, off the top of my head.
You can disable the host parsing in the database. See the Postgres documentation for details. E.g. something like:
ALTER TEXT SEARCH CONFIGURATION your_parser_config
    DROP MAPPING FOR url, url_path;
You can write your own custom dictionary.
You can pre-parse your data before it's inserted into the database in some manner (maybe splitting all domains before going into the database).
I had a similar issue to you last year and opted for solution (2), above.
My solution was to write a custom dictionary that splits words up on non-word characters. A custom dictionary is a lot easier & quicker to write than a new parser. You still have to write C tho :)
The dictionary I wrote would return something like 'www.facebook.com':4, 'com':3, 'facebook':2, 'www':1 for the 'www.facebook.com' domain (we had a unique-ish scenario, hence the 4 results instead of 3).
The trouble with a custom dictionary is that you will no longer get stemming (ie: www.books.com will come out as www, books and com). I believe there is some work (which may have been completed) to allow chaining of dictionaries which would solve this problem.
First off in case you're not aware, tsearch2 is deprecated in favor of the built-in functionality:
http://www.postgresql.org/docs/9/static/textsearch.html
As for your actual question, google.com gets recognized as a host by the parser:
http://www.postgresql.org/docs/9.0/static/textsearch-parsers.html
If you don't want this to occur, you'll need to pre-process your text accordingly (or use a custom parser).
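For example, one very simple pre-processing option (my own illustration, not from the answer) is to replace the dots before the text is indexed:
-- the parser now sees two separate words instead of a single host token
SELECT to_tsvector('english', replace('Google.com', '.', ' '));
-- returns something like: 'com':2 'googl':1 (the english config stems "google" to "googl")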

SSIS save string variable to text file

It seems like it should be simple, but as of yet I haven't found a way to save the value stored in an SSIS string variable to a text file. I've looked at using the flat file destination inside of a data flow, but that requires a data flow source.
Any ideas on how to do this?
Use a script task.
I just tried this. I created a File connection manager, with the connection string pointing to the file I wanted to write to. I then created a string variable containing the text to write.
I added a Script Task, specified my string variable in the Read Only Variables list, then clicked Edit Script. The script was as follows:
public void Main()
{
    ConnectionManager cm = Dts.Connections["File.tmp"];
    var path = cm.ConnectionString;
    var textToWrite = (string)Dts.Variables["User::StringVariable"].Value;
    System.IO.File.WriteAllText(path, textToWrite);
    Dts.TaskResult = (int)ScriptResults.Success;
}
This worked with no problems.
Here's a little sample of some code that worked in a SQL CLR in C#. You'll need to use VB if you're on 2005 I believe. The script task also needs the read variable property set to MyVariable to make the value of your variable available to it.
// create a writer and open the file
TextWriter tw = new StreamWriter("\\\\server\\share$\\myfile.txt");
// write a line of text to the file
tw.WriteLine(Dts.Variables["MyVariable"].Value);
// close the stream
tw.Close();
All it takes is one line of code in a simple Script task. No other dependencies, such as a connection manager, are needed.
Here's what it would look like in C#:
public void Main()
{
    string variableValue = Dts.Variables["TheVariable"].Value.ToString();
    string outputFile = Dts.Variables["Path"].Value.ToString();
    System.IO.File.WriteAllText(outputFile, variableValue);
    Dts.TaskResult = (int)ScriptResults.Success;
}
Obviously the most important line here is the one containing the WriteAllText function call.
The Path variable should contain a full path + filename for the output file.
Ok, I have an answer that doesn't involve the use of a script task. Pick some OLE DB SQL source you have that's simple and that you have a lot of control over. Make a query that returns only one row. Then put this query in a string variable:
"select vara, ' var =: " + @[User::varIWantToSee] + "' as myvar from tablea where vara = 1"
Then in OLEDB source pick "SQL command from a variable"
For varIWantToSee, make sure you initialize it with a lot of characters, or SSIS makes a very small length for that column that it doesn't let you override. At run time varIWantToSee will get set and you can see it. Pump this all into a flat file destination and you are in business. Why do some people have to do this? Because some people need to know the value of the variables in the runtime environment; their laptop development doesn't show the variable values they need. In my case I was running this on an Azure environment that had the database access I needed to test. If I were Microsoft, I would create a task that shows the runtime variable value at that stage of the job by writing it to the SSIS log file created when the package runs. If someone knows how to do that, please enlighten us.
It's possible to use a Derived Column transformation to write the value of a variable into a column. The problem is that it needs a source to drive it, and there's no stock data source you can use that just spits out a null row onto the pipeline.
So, either you repurpose a single-row source to drive the derived column transformation, or you do what another answer suggests, and do it with a Script source.
I did it the way you described. I already had an OLE DB connection manager defined, so I used an OLE DB Source and used the SQL Command data access mode. I used a simple query:
select getdate() as dt
...just to get it out of the way. Now I know the date of my variable pull. Then I used a Derived Column Transform to make my package variables available and wrote it out to a flat file.
Elegant? No, but it gets the job done.
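If it helps, the Derived Column expression itself is just a reference to the package variable, along the lines of (variable name assumed):
@[User::StringVariable]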
Let's say you don't want to mess with Script tasks and you don't have a database you can connect to just to issue a data source command like:
SELECT 'Some arbitrary text'
There are still several ways to use a Process task for something as simple as writing a line of text to a file. For example, you can use PowerShell with an input variable built using the following expression:
"'"+REPLACE(@[User::Text],"'","''")+"' > '"+REPLACE(@[User::Filename],"'","''")+"'"
Notice I escaped the filename because single quotes are legal there. Also note I used '>' for redirecting which overwrites the file if it exists. If I wanted to append I'd use '>>'.
Initially I had trouble with this method when User::Text contained multiple lines. It turns out you need some extra EOL characters after your filename when a command spans lines. Like this:
"'"+REPLACE(#[User::Text],"'","''")+"' > '"+REPLACE(#[User::Filename],"'","''")+"'\r\n\r\n"
Using cmd.exe with echo is a bit more precarious but can also work in certain circumstances and has much less overhead.
P.S. I've noticed with some versions of PowerShell that StandardInputVariable content is ignored without this:
-Command -
in the Arguments box. A lone minus sign as a Command argument is 'magic' and documented at https://learn.microsoft.com/en-us/powershell/scripting/powershell.exe-command-line-help. I believe all versions of PowerShell accept this param so even if it's not required for your version you may want to include it since it shouldn't break anything and may keep your code from breaking if PowerShell is updated to a version that requires it.
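For reference, a rough sketch of how the Execute Process Task might be configured for this approach (the variable name is an assumption):
Executable:            PowerShell.exe (or the full path to it)
Arguments:             -Command -
StandardInputVariable: User::Command   (the expression-built string shown above)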