I have used similar code to load grids with custom FetchXML, but I can't get this one working. I get an "Out of stack space" error. I have tried changing the timer, but that hasn't helped. Has something changed in CRM that I'm not aware of?
function UpdateSubGridRelatedMatters() {
var grid = document.getElementById("RelatedMTIGrid");
//If this method is called from the form OnLoad, make sure that the grid is loaded before proceeding
//Included extra null check as a rollup 5 fix
var relatedMatterID = Xrm.Page.data.entity.attributes.get("sage_relatedmatter").getValue()[0].id;
//if (relatedMatterID != null) {
//Update the fetchXML that will be used by the grid.
var fetchXml = "";
fetchXml += "<fetch version=\" 1.0\" output-format=\" xml-platform\" mapping=\" logical\" distinct=\" false\" >";
fetchXml += "<attribute name=\" sage_mtiid\" />";
fetchXml += "<attribute name=\" sage_name\" />";
fetchXml += "<attribute name=\" createdon\" />";
fetchXml += "<attribute name=\" sage_scientistisconsultant\" />";
fetchXml += "<attribute name=\" sage_secondlab\" />";
fetchXml += "<attribute name=\" sage_scientisthascfn\" />";
fetchXml += "<attribute name=\" sage_scientist\" />";
fetchXml += "<attribute name=\" sage_redlines\" />";
fetchXml += "<attribute name=\" sage_providermatterid\" />";
fetchXml += "<attribute name=\" sage_organizationcontact\" />";
fetchXml += "<attribute name=\" sage_organization\" />";
fetchXml += "<attribute name=\" sage_matterid\" />";
fetchXml += "<attribute name=\" sage_origin\" />";
fetchXml += "<attribute name=\" sage_materialtype\" />";
fetchXml += "<attribute name=\" sage_hostmatterid\" />";
fetchXml += "<attribute name=\" sage_hostcontact\" />";
fetchXml += "<attribute name=\" sage_host\" />";
fetchXml += "<attribute name=\" sage_executedbyhhmi\" />";
fetchXml += "<attribute name=\" createdby\" />";
fetchXml += "<order attribute=\" createdon\" descending=\" false\" />";
fetchXml += "<order attribute=\" sage_name\" descending=\" false\" />";
fetchXml += "<filter type=\" and\" >";
fetchXml += "<condition attribute=\" sage_relatedmtiid\" operator=\" eq\" value=\" " + relatedMatterID + "\" />";
fetchXml += "</filter>";
fetchXml += "</entity>";
fetchXml += "</fetch>";
if (grid == null || grid.readyState != "complete") {
//The subgrid hasn't loaded, wait 1 second and then try again
setTimeout(UpdateSubGridRelatedMatters(), 3000);
return;
}
debugger;
//Inject the new fetchXml
grid.control.SetParameter("fetchXml", fetchXml);
//Force the subgrid to refresh
grid.control.refresh();
// }
}
You're calling UpdateSubGridRelatedMatters and passing its return value, rather than the function itself, as the parameter to setTimeout. That executes the function immediately, which in turn executes it again, and so on recursively; hence the out-of-stack-space error.
Change
setTimeout(UpdateSubGridRelatedMatters(), 3000);
To
setTimeout(UpdateSubGridRelatedMatters, 3000);
I have been trying to get a PowerShell script to work for extracting files (PDF, Word, etc.) from a SQL Server database. I came across the script below. It runs and populates the destination folder, but all the files are 0 bytes, and during execution it throws this error:
"Exporting Objects from FILESTREAM container: .docx
Exception calling "GetBytes" with "5" argument(s): "Invalid attempt to GetBytes on column 'extension'. The GetBytes function can only be used on columns of type Text, NText, or Image.""
Can anyone point out what I am doing wrong and how to fix it, please? Much appreciated.
$Server = ".\xxxxxx";
$Database = "xxxxxx";
$Dest = "C:\DATA\";
$bufferSize = 8192;
$Sql = "
SELECT
[extension]
FROM [XXXXXXXX].[dbo].[XXXXXXdocuments]
";
$con = New-Object Data.SqlClient.SqlConnection;
$con.ConnectionString = "Data Source=$Server;" +
"Integrated Security=True;" +
"Initial Catalog=$Database";
$con.Open();
Write-Output ((Get-Date -format yyyy-MM-dd-HH:mm:ss) + ": Started ...");
$cmd = New-Object Data.SqlClient.SqlCommand $Sql, $con;
$cmd.CommandTimeout = 120
$rd = $cmd.ExecuteReader();
$out = [array]::CreateInstance('Byte', $bufferSize)
While ($rd.Read())
{
try
{
Write-Output ("Exporting Objects from FILESTREAM container: {0}" -f $rd.GetString(0));
$fs = New-Object System.IO.FileStream ($Dest + $rd.GetString(0)), Create, Write;
$bw = New-Object System.IO.BinaryWriter $fs;
$start = 0;
$received = $rd.Getbytes(0, $start, $out, 0, $bufferSize - 1);
While ($received -gt 0)
{
$bw.Write($out, 0, $received);
$bw.Flush();
$start += $received;
$received = $rd.Getbytes(0, $start, $out, 0, $bufferSize - 1);
}
$bw.Close();
$fs.Close();
}
catch
{
Write-Output ($_.Exception.Message)
}
finally
{
$fs.Dispose();
}
}
$rd.Close();
$cmd.Dispose();
$con.Close();
Write-Output ("Finished");
Read-Host -Prompt "Press Enter to exit"
The error occurs because GetBytes is called on ordinal 0, which is the varchar 'extension' column holding the file name extension; the query never selects the binary document data at all, so there is nothing to stream and the files stay at 0 bytes. Beyond that, BinaryWriter is unnecessary; it's for writing primitive types to a Stream.
And there's no need to muck around with buffers; you can simply use SqlDataReader.GetStream(int).CopyTo(Stream), e.g.
$Server = "localhost";
$Database = "adventureworks2017";
$Dest = "C:\temp\";
$Sql = "
SELECT concat('photo', ProductPhotoID, '.jpg') name, LargePhoto from Production.ProductPhoto
";
$con = New-Object Data.SqlClient.SqlConnection;
$con.ConnectionString = "Data Source=$Server;Integrated Security=True;Initial Catalog=$Database;TrustServerCertificate=true";
$con.Open();
Write-Output ((Get-Date -format yyyy-MM-dd-HH:mm:ss) + ": Started ...");
$cmd = New-Object Data.SqlClient.SqlCommand $Sql, $con;
$cmd.CommandTimeout = 120
$rd = $cmd.ExecuteReader();
While ($rd.Read())
{
try
{
Write-Output ("Exporting: {0}" -f $rd.GetString(0));
$fs = New-Object System.IO.FileStream ($Dest + $rd.GetString(0)), Create, Write;
$rd.GetStream(1).CopyTo($fs)
$fs.Close()
}
catch
{
Write-Output ($_.Exception.Message)
}
finally
{
$fs.Dispose();
}
}
$rd.Close();
$cmd.Dispose();
$con.Close();
Write-Output ("Finished");
Having issues trying to get this code to work. It seems like the PDF it generates in Google Drive gets stuck in a loop: each time the script runs, it adds to the PDF rather than making a new PDF. I am not that advanced in code, so I can't seem to fix this. I went back to the original code and it's still happening. To me it seems like either a cache issue or the temp HTML file not getting deleted.
function AUTOPRINTV3() {
var gmailLabels = "AUTOPRINTV3";
var driveFolder = "Incoming Orders 2";
var threads = GmailApp.search("in:" + gmailLabels, 0, 5);
if (threads.length > 0) {
/* Google Drive folder where the Files would be saved */
var folders = DriveApp.getFoldersByName(driveFolder);
var folder = folders.hasNext() ?
folders.next() : DriveApp.createFolder(driveFolder);
/* Gmail Label that contains the queue */
var label = GmailApp.getUserLabelByName(gmailLabels) ?
GmailApp.getUserLabelByName(gmailLabels) : GmailApp.createLabel(gmailLabels);
for (var t=0; t<threads.length; t++) {
threads[t].removeLabel(label);
var msgs = threads[t].getMessages();
var html = "";
var attachments = [];
var subject = threads[t].getFirstMessageSubject();
/* Append all the messages in the thread to an HTML document */
for (var m=0; m<msgs.length; m++) {
var msg = msgs[m];
html += "From: " + msg.getFrom() + "<br />";
html += "To: " + msg.getTo() + "<br />";
html += "Date: " + msg.getDate() + "<br />";
html += "Subject: " + msg.getSubject() + "<br />";
html += "<hr />";
html += msg.getBody().replace(/<img[^>]*>/g,"");
html += "<hr />";
var atts = msg.getAttachments();
for (var a=0; a<atts.length; a++) {
attachments.push(atts[a]);
}
}
/* Save the attachment files and create links in the document's footer */
if (attachments.length > 0) {
var footer = "<strong>Attachments:</strong><ul>";
for (var z=0; z<attachments.length; z++) {
var file = folder.createFile(attachments[z]);
footer += "<li><a href='" + file.getUrl() + "'>" + file.getName() + "</a></li>";
}
html += footer + "</ul>";
}
/* Convert the Email Thread into a PDF File */
var tempFile = DriveApp.createFile("temp.html", html, "text/html");
folder.createFile(tempFile.getAs("application/pdf")).setName(subject + ".pdf");
tempFile.setTrashed(true);
}
}
}
I'm exporting my database in CSV format, but I can't figure out how to export the MySQL table column names as well. Here is the code:
public function save($query)
{
$stmt = $this->db->prepare($query);
$stmt->execute();
/* header("Content-type: text/csv");
header("Content-Disposition: attachment; filename=file.csv");
header("Pragma: no-cache");
header("Expires: 0");
var_dump($stmt->fetch(PDO::FETCH_ASSOC));
$data = fopen('/tmp/db_user_export_".time().".csv', 'w');
while ($row = $stmt->fetch(PDO::FETCH_ASSOC)){
echo "Success";
fputcsv($data, $row);
} */
$list = array ();
// Append results to array
array_push($list, array("## START OF USER TABLE ##"));
while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
array_push($list, array_values($row));
}
array_push($list, array("## END OF USER TABLE ##"));
// Output array into CSV file
$fp = fopen('php://output', 'w');
header('Content-Type: text/csv');
header('Content-Disposition: attachment; filename="file.csv"');
foreach ($list as $ferow) {
fputcsv($fp, $ferow);
}
}
for the query:
include_once 'dbconfig.php';
$query = "SELECT * FROM users";
$crud->save($query);
The file.csv exports correctly, but I would like to also include the names of the MySQL table columns from which the values are taken.
Thanks in advance!
I was having issues posting my table head into the CSV, so I took another approach. Hope it helps anyone who's trying to do what I'm doing:
public function export($table)
{
$stmt2 = $this->db->prepare("DESCRIBE ".$table);
$stmt2->execute();
if ($stmt2->rowCount()>0) {
while ($row = $stmt2->fetch(PDO::FETCH_ASSOC)) {
$csv_output .= $row['Field'].", ";
$i++;
}
}
$csv_output .= "\n";
$stmt3 = $this->db->prepare("SELECT * FROM ".$table);
$stmt3->execute();
while ($rowr = $stmt3->fetch(PDO::FETCH_BOTH)) {
for ($j=0;$j<$i;$j++) {
$csv_output .= $rowr[$j].", ";
}
$csv_output .= "\n";
}
$filename = "raport_proiecte_".date("Y-m-d_H-i",time()).".csv";
header("Content-Type: application/xls");
header("Content-Disposition: attachment; filename=$filename");
header("Pragma: no-cache");
header("Expires: 0");
print $csv_output;
exit;
}
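If you would rather stay with the original fputcsv()-based save() method, the column names are already available as the keys of the first associative row, so they can be written as the header line before the data. A minimal sketch under that assumption ($this->db is the same PDO handle as above; saveWithHeader is just an illustrative name):
public function saveWithHeader($query)
{
    $stmt = $this->db->prepare($query);
    $stmt->execute();
    header('Content-Type: text/csv');
    header('Content-Disposition: attachment; filename="file.csv"');
    $fp = fopen('php://output', 'w');
    $first = true;
    while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
        if ($first) {
            // the column names are the keys of the first associative row
            fputcsv($fp, array_keys($row));
            $first = false;
        }
        fputcsv($fp, array_values($row));
    }
    fclose($fp);
}
This also keeps the CSV quoting that fputcsv() provides, which manual string concatenation loses for values containing commas or quotes.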
I was trying to write just the result set from a query to an Excel file, but I keep getting the header column with the row count, which messes up the subsequent data processing I need to do. I could open the exported file and delete the first row, but it would be much better if I could export the dataset without the header row.
Here's my hack; I wonder if anyone has a better way to do it. I am taking the generated HTML and using a regex to yank out the header row:
public string DumpToHtmlString<T>(T objectToSerialize, string filePath )
{
string strHTML = "", outputWithoutHeader = "";
try
{
var writer = LINQPad.Util.CreateXhtmlWriter(true);
writer.Write(objectToSerialize);
strHTML = writer.ToString();
outputWithoutHeader = Regex.Replace(strHTML, "<tr><td class=\"typeheader\"((\\s*?.*?)*?)<\\/(tr|TR)>", "", RegexOptions.Multiline);
System.IO.File.WriteAllText(filePath, outputWithoutHeader);
}
catch (Exception exc)
{
Debug.Assert(false, "Investigate why ?" + exc);
}
return outputWithoutHeader;
}
Is the objectToSerialize an IEnumerable? If so, the LINQPad beta has a WriteCsv method which is designed to create Excel-friendly CSV files:
Util.WriteCsv(data, @"c:\temp\results.csv");
Otherwise, you're safer using the LINQ-to-XML DOM for modifying the output rather than regex. The following code illustrates how to remove formatting from LINQPad output; you can adapt it to remove headings and totals as well:
XDocument doc = XDocument.Load (...);
XNamespace xns = "http://www.w3.org/1999/xhtml";
bool stripFormatting = true; // not defined in the original snippet; set to false to keep styled headings
doc.Descendants (xns + "script").Remove ();
doc.Descendants (xns + "span").Where (el => (string)el.Attribute ("class") == "typeglyph").Remove ();
doc.Descendants ().Attributes ("style").Where (a => (string)a == "display:none").Remove ();
doc.Descendants (xns + "style").Remove ();
doc.Descendants (xns + "tr").Where (tr => tr.Elements ().Any (td => (string)td.Attribute ("class") == "typeheader")).Remove ();
doc.Descendants (xns + "i").Where (e => e.Value == "null").Remove ();
foreach (XElement anchor in doc.Descendants (xns + "a").ToArray ())
anchor.ReplaceWith (anchor.Nodes ());
var presenters = doc.Descendants (xns + "table")
.Where (el => (string)el.Attribute ("class") == "headingpresenter")
.Where (e => e.Elements ().Count () == 2)
.ToArray ();
foreach (var p in presenters)
{
var heading = p.Elements ().First ().Elements ();
var content = p.Elements ().Skip (1).First ().Elements ();
if (stripFormatting)
p.ReplaceWith (heading, new XElement (xns + "p", content));
else
p.ReplaceWith (
new XElement (xns + "br"),
new XElement (xns + "span", new XAttribute ("style", "color: green; font-weight:bold; font-size: 110%;"), heading),
content);
}
// Excel centre-aligns th even if the style says otherwise. So we replace them with td elements.
foreach (var th in doc.Descendants (xns + "th"))
{
th.Name = xns + "td";
if (!stripFormatting && th.Attribute ("style") == null)
th.Add (new XAttribute ("style", "font-weight: bold; background-color: #ddd;"));
}
string finalResult = doc.ToString().Replace ("Ξ", "").Replace ("▪", "");
I want to update every row's alternativegroup with the $id value.
The action gets the error: coldchain cannot be null.
When I search the DB for pk = $a, its coldchain value is boolean(false), and the DB is PostgreSQL.
How can I set $q->attributes without posting the other values?
public function actionUpdate($id){
if (isset($_POST['forms'])){
$arr = explode(',', $_POST['forms']);
foreach ($arr as $a){
$q = MedicineDrugform::model()->findByPk($a);
$q->alternativegroup = $id;
if ($q->save()){
echo $q->id."q saved <br />";
}
else {
echo "<pre>";
print_r($q->getErrors());
}
die();
$qu = MedicineDrugformUpdate::model()->findByAttributes(array('drug_form_id'=>$a));
$quu = MedicineDrugformUpdate::model()->findByPk($qu->id);
$quu->alternativegroup = $id;
if ($quu->save()){
echo $quu->id."qu saved <br />";
}
}
die();
$this->render('/site/messages', array('message'=>'formsaved'));
}
$this->render('add', array('id'=>$id));
}
Maybe it can't find the corresponding record for
$q = MedicineDrugform::model()->findByPk($a);
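If findByPk() returns null there, the next line will fail, so it's worth guarding for that. As for setting $q->attributes without posting the other values: in Yii 1.x, CActiveRecord::saveAttributes() updates only the columns you list and skips validation entirely, so the coldchain rule never runs. A minimal sketch, assuming the same models as in the question:
$q = MedicineDrugform::model()->findByPk($a);
if ($q === null) {
    echo "no MedicineDrugform record for pk ".$a."<br />";
} else {
    // updates only this column and bypasses validation,
    // so the "coldchain cannot be null" rule is never triggered
    $q->saveAttributes(array('alternativegroup' => $id));
}
Alternatively, $q->save(false) saves the whole record without running validation.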