How can I keep the thousands separator when exporting a Splunk Dashboard to PDF? - splunk

Is there a way to keep the thousands separator in a table when exporting to PDF?
When I load the dashboard in Splunk, the table shows the thousands separator.
But when I export it to PDF, the thousands separator is not displayed.

Related

Export multiple files with InDesign Data Merge

I am creating progress reports for around 50 different companies in InDesign. Each report is 10 pages long and has approximately 40 images and text fields that need to change based on the company.
I set up a data merge in InDesign and mapped all of the text and image fields. When I execute the data merge, the text and images map perfectly, but it creates one large 500-page document (10 pages x 50 companies), i.e. the report for Company A is on pages 1-10, the report for Company B is on pages 11-20, and so on.
While I could break this up into individual reports in Acrobat Pro, that step seems like it should be unnecessary. How can this be automated, preferably within InDesign? And how would I then be able to save each file based on a field in the merge CSV?
I agree with Nicolai; I don't think the default data merge option can create split documents.
You could use the following script to split your document into separate parts:
https://creativepro.com/free-script-splits-long-indesign-files/
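Whichever tool does the actual PDF splitting, the bookkeeping is the same: every company occupies a fixed 10-page block in merge order, so the page ranges and output filenames can be derived straight from the merge CSV. A minimal Python sketch of that mapping step (assuming a hypothetical merge CSV with a "company" column; the splitting itself would still be done by the linked script or a PDF library):

```python
import csv
import io

PAGES_PER_REPORT = 10  # each merged report is 10 pages long

def plan_split(csv_text, name_field="company"):
    """Return (filename, first_page, last_page) for each row of the merge CSV.

    Pages are 1-based, matching the merged 500-page document.
    """
    rows = csv.DictReader(io.StringIO(csv_text))
    plan = []
    for i, row in enumerate(rows):
        first = i * PAGES_PER_REPORT + 1
        last = first + PAGES_PER_REPORT - 1
        plan.append((f"{row[name_field]}.pdf", first, last))
    return plan

# Example with a hypothetical two-company merge CSV:
sample = "company\nCompany A\nCompany B\n"
for filename, first, last in plan_split(sample):
    print(filename, first, last)
```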

How to extract table data from pdf and store it in csv/excel using Automation Anywhere?

I want to extract the table data from a PDF to Excel/CSV. How can I do this using Automation Anywhere?
Please find below a sample table from the PDF document.
There are multiple ways to extract data from PDFs.
You can extract raw data, formatted data, or create form fields if the layout is consistent.
If the layout is more random, you might want to take a look at IQ Bot, where there are predefined classifications for things like Orders etc.
If you have a standard format, I would lean toward form fields when the document contains unusual characters, such as the double-prime (") used for inches, since those don't map well through the raw/formatted options.
The raw option has some quirks where you don't always get all the characters you expect, such as a missing first letter of a data item.
The formatted option is good at capturing tabular columns as they run across the line.

SSRS Export to CSV oddity

I have a report created with a row above the header and a footer. The export treats the row above the header, as well as the footer, as a separate column and repeats it over and over for each data row.
I need one line as the first line of my CSV file (not repeated), the same for the footer, and all of the data in between those two lines.
I have tried to take them out of the table and put them into separate text boxes, but this has the same result.
As you can see in the CSV image, it treats the HD and TR lines as rows. The HD line should appear at the top once only, and the TR line at the bottom once only.
This is expected, and by design. CSV (and XML) exports are designed to be very generic data preserving formats. They include any dynamic data in the report as a field. All fields will be listed in the header row, where the field name is the name of the text box in the report.
If you need custom formatting, such as a header row with report totals or the date of execution, then SSRS CSV export is probably not for you. I'd look at SSIS or at writing a custom export application (.exe), or you could look at using one of the other export formats from SSRS.
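If you do go the custom-export route, the post-processing itself is small: read the CSV that SSRS produces, keep the header and footer values from the first data row only, and write everything out once. A hedged Python sketch (the column names HD and TR are taken from the question; adjust them to your report's text box names):

```python
import csv
import io

def restructure(csv_text, header_col="HD", footer_col="TR"):
    """Collapse repeated header/footer fields from an SSRS CSV export.

    Returns the header value once, the remaining detail rows, and the
    footer value once.
    """
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    header = rows[0][header_col]
    footer = rows[0][footer_col]
    details = [
        {k: v for k, v in row.items() if k not in (header_col, footer_col)}
        for row in rows
    ]
    return header, details, footer

# Hypothetical export where the HD and TR values repeat on every row:
sample = "HD,Item,TR\nReport Title,apple,Totals: 2\nReport Title,banana,Totals: 2\n"
header, details, footer = restructure(sample)
print(header)           # once, at the top
for row in details:
    print(row["Item"])
print(footer)           # once, at the bottom
```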

Extract MS Word document chapters to SQL database records?

I have a 300+ page word document containing hundreds of "chapters" (as defined by heading formats) and currently indexed by word. Each chapter contains a medium amount of text (typically less than a page) and perhaps an associated graphic or two. I would like to split the document up into database records for use in an iPhone program - each chapter would be a record consisting of a title, id #, and content fields. I haven't decided yet if I would want the pictures to be a separate field (probably just containing a file name), or HTML or similar style links in the content text. In any case, the end result would be that I could display a searchable table of titles that the user could click on to pull up any given entry.
The difficulty I am having at the moment is getting from the word document to the database. How can I most easily split the document up into records by chapter, while keeping the image associations? I thought of inserting some unique character between each chapter, saving to text format, and then writing a script to parse the document into a database based on that character, but I'm not sure that I can handle the graphics in this scenario. Other options?
To answer my own question, given a fairly simply formatted Word document:
Convert it to an Open Office XML document.
Write a Python script to parse the document into a database using the xml.sax Python module.
Images are inserted into each record as HTML, to be displayed using a web interface.
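The parsing step can be sketched with xml.sax. This is a minimal illustration, not the real OpenOffice schema: it assumes hypothetical <h> and <p> tags standing in for whatever heading and paragraph elements your converted document actually uses.

```python
import xml.sax

class ChapterHandler(xml.sax.ContentHandler):
    """Start a new (title, content) record at each heading element."""

    def __init__(self):
        super().__init__()
        self.records = []   # one dict per chapter
        self.buffer = []    # character data inside the current element

    def startElement(self, name, attrs):
        if name in ("h", "p"):      # hypothetical heading/paragraph tags
            self.buffer = []

    def endElement(self, name):
        text = "".join(self.buffer).strip()
        if name == "h":             # a heading opens a new chapter record
            self.records.append({"title": text, "content": ""})
        elif name == "p" and self.records:
            self.records[-1]["content"] += text + "\n"
        self.buffer = []

    def characters(self, data):
        self.buffer.append(data)

doc = ("<doc><h>Chapter One</h><p>First text.</p>"
       "<h>Chapter Two</h><p>Second text.</p></doc>")
handler = ChapterHandler()
xml.sax.parseString(doc.encode("utf-8"), handler)
# handler.records now holds one dict per chapter, ready to insert as rows.
```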

Insert rows of states and iso's from comma separated textfile into webbased Mysql

I'm building an international webshop. For the part where a customer has to fill in their address, how do I insert the rows from a comma-separated text file containing a list of states into my web-based MySQL database?
Example
AM,04,"Geghark'unik'"
AM,05,"Kotayk'"
AM,06,"Lorri"
AM,07,"Shirak"
AM,08,"Syunik'"
AM,09,"Tavush"
AM,10,"Vayots' Dzor"
I found the whole list here: http://www.maxmind.com/app/fips_include
There's a "File to import" page, but I get errors while importing the list.
Use MySQL's LOAD DATA INFILE functionality.
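For a rough idea of what LOAD DATA INFILE does with this file, here is a Python sketch that parses the same comma-separated, quote-wrapped format with the csv module and loads it into an in-memory SQLite table. SQLite stands in for MySQL here purely for illustration; in production you would point LOAD DATA INFILE (or a parameterized executemany) at your real regions table.

```python
import csv
import io
import sqlite3

# The sample lines from the MaxMind FIPS list.
data = """AM,04,"Geghark'unik'"
AM,05,"Kotayk'"
AM,06,"Lorri"
AM,07,"Shirak"
AM,08,"Syunik'"
AM,09,"Tavush"
AM,10,"Vayots' Dzor"
"""

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE regions (country TEXT, code TEXT, name TEXT)")

# csv.reader handles the double-quoted names, which contain apostrophes.
rows = list(csv.reader(io.StringIO(data)))
conn.executemany("INSERT INTO regions VALUES (?, ?, ?)", rows)

count, = conn.execute("SELECT COUNT(*) FROM regions").fetchone()
print(count)  # prints 7
```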