Aspose Cells .NET Core: some text is cut off on Linux host - asp.net-core

I ran into an issue while developing an export feature using Aspose on the .NET Core platform. The issue is that some of the rows in the data exported as PDF are being cut off.
I tried it in my local environment (Windows) and there is no issue; the result is as good as I expected.
But when I publish the code to the host environment (Linux), the result seems off.
My local result (Windows)
My hosted result (Linux)
Here is the piece of code I am using to generate the file:
public FileData ExportToFile(DataTable data, string format, string query)
{
    var workbook = new Workbook();
    var style = workbook.CreateStyle();
    workbook.DefaultStyle = style;
    var sheet = workbook.Worksheets[0];
    var autoFitterOptions = new AutoFitterOptions { AutoFitMergedCells = true, OnlyAuto = true };
    data = _RemoveFormat(data); // Parse all data to String
    /* ... */
    sheet.PageSetup.Orientation = PageOrientationType.Landscape;
    var pdfSaveOptions = new PdfSaveOptions();
    pdfSaveOptions.AllColumnsInOnePagePerSheet = true;
    sheet.AutoFitRows(autoFitterOptions);
    sheet.AutoFitColumns(autoFitterOptions);
    workbook.Save(stream, pdfSaveOptions); // 'stream' is set up in the omitted code above
    /* ... */
}

After discussing with Aspose support, I found out that the difference is caused by fonts missing from my hosted server (Linux). So I needed to install them on the server to get both documents to have the same style.
In my case, my app was hosted in a Kubernetes cluster, so I added these commands to the Dockerfile:
RUN echo ttf-mscorefonts-installer msttcorefonts/accepted-mscorefonts-eula select true | debconf-set-selections
RUN apt-get update \
&& apt-get install -y --allow-unauthenticated \
fontconfig \
ttf-mscorefonts-installer
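As a side note, if installing system packages into the image is not an option, Aspose.Cells can also be pointed at a folder of fonts shipped with the application. A minimal sketch, assuming the TTF files are copied to a "fonts" folder next to the app (that path is an assumption, not part of my setup):

// Hypothetical: tell Aspose.Cells where to find bundled fonts.
// This must run before the first Workbook is rendered.
using System.IO;
using Aspose.Cells;

FontConfigs.SetFontFolder(
    Path.Combine(AppContext.BaseDirectory, "fonts"), // assumed font folder
    true); // also scan subfolders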


How to create a Hashicorp Vault user using Terraform

I am trying to create a Vault user in Terraform but can't seem to find the appropriate command to do so. I've searched the Terraform Registry and also performed some online searches but all to no avail.
All I'm looking to do is create a user, using the corresponding Terraform command to the Vault CLI command below:
vault write auth/userpass/users/bob password="passworld123" policies="default"
Any suggestions?
@hitman126 I guess you can make use of the 'vault' provider and the 'vault_auth_backend' resource block. Your code should look something like the below:
terraform {
  required_providers {
    vault = {
      source  = "hashicorp/vault"
      version = "3.5.0"
    }
  }
}

provider "vault" {
}

resource "vault_auth_backend" "example" {
  type = "userpass"
}

resource "vault_generic_secret" "developer_sample_data" {
  path      = "secret/foo"
  data_json = <<EOT
{
  "username": "bob",
  "password": "passworld123"
}
EOT
}
In the above code block, path is the full logical path where we write the given data. To write data into the "generic" secret backend mounted in Vault by default, this should be prefixed with 'secret/'.
This might not be a full-fledged solution, but you can try something like this.
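If you want a closer Terraform equivalent of the exact CLI command from the question, the same provider also has a vault_generic_endpoint resource that can write to arbitrary paths such as auth/userpass/users/bob. A hedged sketch, reusing the auth backend resource from above (the values are just those from the question):

resource "vault_generic_endpoint" "bob" {
  depends_on           = [vault_auth_backend.example]
  path                 = "auth/userpass/users/bob"
  ignore_absent_fields = true

  data_json = <<EOT
{
  "password": "passworld123",
  "policies": ["default"]
}
EOT
}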
Solution 2:
If you have Vault installed on the machine and you would like to achieve the above use case using the vault command alone (if you don't want to use the terraform-vault provider), then you can try something like below.
Create one small sh script with the above vault command (vault-write.sh):
touch vault-write.sh
The content of the script can be similar to below:
#!/bin/sh
vault write auth/userpass/users/bob password="passworld123" policies="default"
Make the script executable:
chmod +x vault-write.sh
Create a .tf file with a null_resource and a local-exec provisioner that invokes this sh script:
touch vault.tf
The contents of the vault.tf file can be similar to below:
terraform {
  required_version = "~> 1.1.1"
}

resource "null_resource" "vault_write" {
  provisioner "local-exec" {
    command = "/bin/sh vault-write.sh"
  }
}

PBIVIZ not recognizing certificate

I've been updating my development environment with the latest pbiviz stuff.
I did a:
npm i -g powerbi-visuals-tools
and:
pbiviz --install-cert
in Windows Terminal/PowerShell.
Then I opened a project in VS Code and, using the terminal, did a:
pbiviz package
info Building visual...
info Installing API: ~3.8.0...
Certificate is invalid!
warn Local valid certificate not found.
info Checking global instance of pbiviz certificate...
warn Global instance of valid pbiviz certificate not found.
info Generating a new certificate...
info Certificate generated. Location is C:\Users\mike\AppData\Roaming\npm\node_modules\powerbi-visuals-tools\certs\PowerBICustomVisualTest_public.pfx. Passphrase is '4492518445773821'
info Start preparing plugin template
info Finish preparing plugin template
error error:0308010C:digital envelope routines::unsupported
C:\Users\mike\AppData\Roaming\npm\node_modules\powerbi-visuals-tools\node_modules\powerbi-visuals-webpack-plugin\index.js:185
throw new Error("Failed to generate visualPlugin.ts");
^
Error: Failed to generate visualPlugin.ts
at C:\Users\mike\AppData\Roaming\npm\node_modules\powerbi-visuals-tools\node_modules\powerbi-visuals-webpack-plugin\index.js:185:12
at async PowerBICustomVisualsWebpackPlugin._beforeCompile (C:\Users\mike\AppData\Roaming\npm\node_modules\powerbi-visuals-tools\node_modules\powerbi-visuals-webpack-plugin\index.js:177:4)
Node.js v17.0.0
I've tried uninstalling, restarting and various incantations, but it doesn't want to go.
Is my certificate really invalid? How do I check it? Are there any diagnostics I can run?
Any and all advice gladly accepted.
I just updated (pbiviz -V reports 3.4.1); same problem.
After a debug session we found an error in powerbi-visuals-tools#3.4.1 where the certificate check in CertificateTools.js uses the text form of the certificate expiry date, which in my case is dd/mm/yyyy. That fails because the check expects ISO 8601, but it will work with mm/dd/yyyy.
[![debug image][1]][1]
This is the code:
// For Windows OS:
if (os.platform() === "win32") {
    if (!fs.existsSync(pfxPath) || !passphrase) {
        return false;
    }
    let certStr = await exec(`certutil -p ${passphrase} -dump "${pfxPath}"`);
    let certStrSplitted = certStr.split('\r\n');
    let regex = /(?<=: ).*/;
    endDateStr = regex.exec(certStrSplitted[6]);
}
// For Linux and Mac/darwin OS:
else if (os.platform() === "linux" || os.platform() === "darwin") {
    if (!fs.existsSync(certPath)) {
        return false;
    }
    endDateStr = await exec(`openssl x509 -enddate -noout -in ${certPath} | cut -d = -f 2`);
}
let endDate = new Date(Date.parse(endDateStr));
verifyCertDate = (endDate - new Date()) > certSafePeriod;
if (verifyCertDate) {
    ConsoleWriter.info(`Certificate is valid.`);
} else {
    ConsoleWriter.warn(`Certificate is invalid!`);
    removeCertFiles(certPath, keyPath, pfxPath);
}
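The failure mode is easy to reproduce (made-up dates, runnable in any Node REPL): Date.parse accepts mm/dd/yyyy but returns NaN for a dd/mm/yyyy string whose day part is greater than 12, so the comparison above treats a perfectly valid certificate as invalid:

// Hypothetical dates illustrating the locale problem:
console.log(Date.parse("01/25/2030")); // mm/dd/yyyy -> a valid timestamp
console.log(Date.parse("25/01/2030")); // dd/mm/yyyy -> NaN ("Invalid Date" downstream)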
We don't have a full solution yet, but there are workarounds until the package is fixed. Deleting all the modules and reinstalling seemed to fix the visualPlugin.ts problem as well.
[1]: https://i.stack.imgur.com/XVrsQ.png

How to store Terraform provisioner "local-exec" output in local variable and use variable value in "remote-exec"

I am working with Terraform provisioners, and in one scenario I need to execute a 'local-exec' provisioner and use the output (an array of IP addresses) of the command in the next 'remote-exec' provisioner.
I am not able to store the 'local-exec' output in a local variable to use later. I can store it in a local file, but not in an intermediate variable.
count = "${length(data.local_file.instance_ips.content)}"
This is not working.
resource "null_resource" "get-instance-ip-41" {
provisioner "local-exec" {
command = "${path.module}\\scripts\\findprivateip.bat > ${data.template_file.PrivateIpAddress.rendered}"
}
}
data "template_file" "PrivateIpAddress" {
template = "/output.log"
}
data "local_file" "instance_ips" {
filename = "${data.template_file.PrivateIpAddress.rendered}"
depends_on = ["null_resource.get-instance-ip-41"]
}
output "IP-address" {
value = "${data.local_file.instance_ips.content}"
}
# ---------------------------------------------------------------------------------------------------------------------
# Update the instnaces by installing newrelic agent using remote-exec
# ---------------------------------------------------------------------------------------------------------------------
resource "null_resource" "copy_file_newrelic_v_29" {
depends_on = ["null_resource.get-instance-ip-41"]
count = "${length(data.local_file.instance_ips.content)}"
triggers = {
cluster_instance_id = "${element(values(data.local_file.instance_ips.content[count.index]), 0)}"
}
provisioner "remote-exec" {
connection {
agent = "true"
bastion_host = "${aws_instance.bastion.*.public_ip}"
bastion_user = "ec2-user"
bastion_port = "22"
bastion_private_key = "${file("C:/keys/nvirginia-key-pair-ajoy.pem")}"
user = "ec2-user"
private_key = "${file("C:/keys/nvirginia-key-pair-ajoy.pem")}"
host = "${self.triggers.cluster_instance_id}"
}
inline = [
"echo 'license_key: 34adab374af99b1eaa148eb2a2fc2791faf70661' | sudo tee -a /etc/newrelic-infra.yml",
"sudo curl -o /etc/yum.repos.d/newrelic-infra.repo https://download.newrelic.com/infrastructure_agent/linux/yum/el/6/x86_64/newrelic-infra.repo",
"sudo yum -q makecache -y --disablerepo='*' --enablerepo='newrelic-infra'",
"sudo yum install newrelic-infra -y"
]
}
}
Unfortunately you can't. The solution I have found is to instead use an external data source block. You can run a command from there and retrieve its output(s); the only catch is that the command needs to produce JSON on standard output (stdout). See the external data source documentation. I hope this is of some help to others trying to solve this problem.
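A minimal sketch of that approach, assuming a hypothetical findprivateip.sh that prints the IPs as a JSON object of strings (the external data source requires exactly that shape on stdout):

data "external" "instance_ips" {
  # The program must print a JSON object whose values are all strings,
  # e.g. {"ips": "10.0.0.4,10.0.0.5"}
  program = ["bash", "${path.module}/scripts/findprivateip.sh"]
}

locals {
  # Split the comma-separated string back into a list of addresses
  instance_ips = split(",", data.external.instance_ips.result.ips)
}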

mbrola binary for Linux CentOS

I am trying to use the mbrola binary on a CentOS box. I tried many binaries listed on the page below, but none of them is working.
http://www.tcts.fpms.ac.be/synthesis/mbrola/mbrcopybin.html
I am getting the following error:
Processing Utterance: com.sun.speech.freetts.ProcessException: Cannot start mbrola program:
I believe this is most likely an incompatible binary for CentOS.
Can you please tell me if there is a binary available for CentOS?
Code:
import javax.sound.sampled.AudioFileFormat.Type;

import com.sun.speech.freetts.Voice;
import com.sun.speech.freetts.VoiceManager;
import com.sun.speech.freetts.audio.AudioPlayer;
import com.sun.speech.freetts.audio.SingleFileAudioPlayer;

// audioDir and Constants.mbrolaDiskPath are defined elsewhere in the class
public static void createAudioFile(String text, String fileName) {
    AudioPlayer audioPlayer = null;
    //System.setProperty("freetts.voices", "com.sun.speech.freetts.en.us.cmu_time_awb.AlanVoiceDirectory");
    System.setProperty("mbrola.base", Constants.mbrolaDiskPath);
    VoiceManager vm = VoiceManager.getInstance();
    Voice voice = vm.getVoice("mbrola_us1");
    voice.allocate();
    try {
        String directoryPath = audioDir + fileName;
        audioPlayer = new SingleFileAudioPlayer(directoryPath, Type.WAVE);
        voice.setAudioPlayer(audioPlayer);
        voice.speak(text);
        voice.deallocate();
        audioPlayer.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
I found an mbrola binary for CentOS at the following location:
http://rpm.pbone.net/index.php3/stat/4/idpl/30430620/dir/centos_7/com/mbrola-301h-7.1.x86_64.rpm.html#content
Steps to follow:
1. Download the following rpm (ftp.gwdg.de mirror): mbrola-301h-7.1.x86_64.rpm
2. Run: rpm -ivh mbrola-301h-7.1.x86_64.rpm. This will install the mbrola binary into /usr/bin (see the quick check below).
3. Copy /usr/bin/mbrola to your preferred location and set mbrola.base to it: System.setProperty("mbrola.base", Constants.mbrolaDiskPath);
Done.
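A quick sanity check that the installed binary actually executes on the box; this is just a sketch, and the exact banner text may differ:

file /usr/bin/mbrola      # should report an x86_64 executable matching your OS
/usr/bin/mbrola           # should print mbrola's usage banner rather than "cannot execute binary file"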

Gulp task to SSH and then mysqldump

So I've got this scenario where I have separate web and MySQL servers, and I can only connect to the MySQL server from the web server.
So basically every time I have to go like:
step 1: 'ssh -i ~/somecert.pem ubuntu@1.2.3.4'
step 2: 'mysqldump -u root -p'password' -h 6.7.8.9 database_name > output.sql'
I'm new to gulp, and my aim was to create a task that could automate all this, so running one gulp task would automatically deliver the SQL file.
This would make a developer's life a lot easier, since it would just take one command to download the latest db dump.
This is how far I've got (gulpfile.js):
////////////////////////////////////////////////////////////////////
// Run: 'gulp download-db' to get latest SQL dump from production //
// File will be put under the 'dumps' folder                      //
////////////////////////////////////////////////////////////////////

// Load stuff
'use strict'
var gulp = require('gulp')
var GulpSSH = require('gulp-ssh')
var fs = require('fs');

// Function to get home path
function getUserHome() {
    return process.env.HOME || process.env.USERPROFILE;
}
var homepath = getUserHome();

///////////////////////////////////////
// SETTINGS (change if needed)       //
///////////////////////////////////////
var config = {
    // SSH connection
    host: '1.2.3.4',
    port: 22,
    username: 'ubuntu',
    //password: '1337p4ssw0rd', // Uncomment if needed
    privateKey: fs.readFileSync(homepath + '/certs/somecert.pem'), // Comment out if using password
    // MySQL connection
    db_host: 'localhost',
    db_name: 'clients_db',
    db_username: 'root',
    db_password: 'dbp4ssw0rd',
}

////////////////////////////////////////////////
// Core script, don't need to touch from here //
////////////////////////////////////////////////

// Set up SSH connector
var gulpSSH = new GulpSSH({
    ignoreErrors: true,
    sshConfig: config
})

// Run the mysqldump
gulp.task('download-db', function () {
    return gulpSSH
        // runs the mysql dump
        .exec(['mysqldump -u ' + config.db_username + ' -p\'' + config.db_password + '\' -h ' + config.db_host + ' ' + config.db_name], {filePath: 'dump.sql'})
        // pipes output into local folder
        .pipe(gulp.dest('dumps'))
})

// Run search/replace "optional"
SSH into the web server runs fine, but I have an issue when trying to get the mysqldump; I'm getting this message:
events.js:85
throw er; // Unhandled 'error' event
^
Error: Warning:
If I try the same mysqldump command manually over SSH on the server, I get:
Warning: mysqldump: unknown variable 'loose-local-infile=1'
followed by the correct mysql dump info.
So I think this warning message is messing up my script. I would like to ignore warnings in cases like this, but I don't know how to do it or whether it's even possible.
Also, I've read that using the password directly on the command line is not really good practice.
Ideally, I would like to have all the config vars loaded from another file, but this is my first gulp task and I'm not really familiar with how I would do that.
Can someone with experience in Gulp point me towards a good way of getting this done? Or do you think I shouldn't be using Gulp for this at all?
Thanks!
As I suspected, that warning message was preventing the gulp task from finalizing. I got rid of it by commenting out the loose-local-infile=1 line in /etc/mysql/my.cnf.
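For reference, the change is just commenting out that one line. A sketch of the edit (the exact file path and the section the line lives under vary between distros, so treat the placement as an assumption):

# /etc/mysql/my.cnf (excerpt)
[mysql]
# loose-local-infile=1    <- commented out so mysqldump stops emitting the warning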