I'm using the JavaScript SDK of the Microsoft Speech Synthesizer and calling speakTextAsync to convert text to speech.
This works perfectly, but sometimes the text is long and I want to be able to cancel mid-speech, yet I cannot find any way to do this. The documentation doesn't indicate any way to cancel. The name speakTextAsync suggests that it returns a Task that could be cancelled, but in fact the method returns undefined, and I can't find any other mechanism. How can this be done?
It seems there is no way to stop it while it is speaking. As a workaround, however, you can download the audio file and play it yourself, so that you control everything. Try the code below:
import com.microsoft.cognitiveservices.speech.*;
import com.microsoft.cognitiveservices.speech.audio.AudioConfig;
import java.nio.file.*;
import java.io.*;
import javax.sound.sampled.*;

public class TextToSpeech {
    public static void main(String[] args) {
        try {
            String speechSubscriptionKey = "key";
            String serviceRegion = "location";
            String audioTempPath = "d://test.wav"; // temp file location

            SpeechConfig config = SpeechConfig.fromSubscription(speechSubscriptionKey, serviceRegion);
            AudioConfig streamConfig = AudioConfig.fromWavFileOutput(audioTempPath);
            SpeechSynthesizer synth = new SpeechSynthesizer(config, streamConfig);

            String filePath = "....//test2.txt"; // .txt file for test with long text
            Path path = Paths.get(filePath);
            String text = Files.readString(path);
            synth.SpeakText(text); // synthesize the whole text into the temp .wav file

            Speaker speaker = new Speaker(audioTempPath);
            new Thread(speaker).start();
            System.out.println("play audio for 8s...");
            Thread.sleep(8000);
            System.out.println("stop play audio");
            // Stop the Clip directly; the deprecated Thread.stop() would not stop
            // playback anyway, because Clip plays on its own background thread.
            speaker.stop();
        } catch (Exception ex) {
            System.out.println("Unexpected exception: " + ex);
            System.exit(1);
        }
    }
}

class Speaker implements Runnable {
    private final String path;
    private volatile Clip clip;

    public Speaker(String path) {
        this.path = path;
    }

    public void stop() {
        if (clip != null) {
            clip.stop();
            clip.close();
        }
    }

    public void run() {
        try (AudioInputStream stream = AudioSystem.getAudioInputStream(new File(path))) {
            AudioFormat format = stream.getFormat();
            DataLine.Info info = new DataLine.Info(Clip.class, format);
            clip = (Clip) AudioSystem.getLine(info);
            clip.open(stream); // Clip loads the whole file, so the stream can be closed
            clip.start();
        } catch (Exception e) {
            System.out.println(e.getMessage());
        }
    }
}
What tools are out there for managing configuration files (per environment) for Kotlin/Javalin applications?
Alternatives to Konf (https://github.com/uchuhimo/konf)?
Here's an example "ConfigTool". You could expand this to create multiple objects and parse multiple files. I can't take full credit for it; I originally adapted it from someone else:
package app.utils;

import app.pojos.DBProps;
import app.pojos.WebAppVars;
import java.io.FileInputStream;
import java.util.Properties;

public final class ConfigTool {
    // Connection parameters for the database, populated once at startup
    private static final DBProps sysParams = new DBProps();

    public static void load() {
        // Read configuration properties for where the DB is and how to connect;
        // close the program if an exception is caught
        Properties cfg_props = new Properties();
        try (FileInputStream configfile = new FileInputStream(WebAppVars.configLocation)) {
            cfg_props.load(configfile);
            sysParams.setDbAddr(cfg_props.getProperty("database.dbAddr"));
            sysParams.setDbUser(cfg_props.getProperty("database.dbUser"));
            sysParams.setDbPass(cfg_props.getProperty("database.dbPass"));
            sysParams.setDbType(cfg_props.getProperty("database.dbType"));
            sysParams.setDbName(cfg_props.getProperty("database.dbName"));
            sysParams.setDbAuth(cfg_props.getProperty("database.dbAuth"));
            // TODO Add more branches to support a larger array of SQL servers
            if (sysParams.getDbType().equalsIgnoreCase("MSSQL")) {
                sysParams.setDbConnStr("jdbc:sqlserver://" + sysParams.getDbAddr() + ";databaseName=" + sysParams.getDbName());
                // Note: the current driver class is com.microsoft.sqlserver.jdbc.SQLServerDriver;
                // the old com.microsoft.jdbc.sqlserver.* name is from the legacy 2000-era driver.
                sysParams.setDbDriver("com.microsoft.sqlserver.jdbc.SQLServerDriver");
            }
        } catch (Exception e) {
            e.printStackTrace();
            System.exit(1);
        }
    }

    public static DBProps getSysParams() {
        return sysParams;
    }
}
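If you'd rather avoid a library entirely, a common lightweight pattern is to select the properties file by environment variable. This is a minimal sketch, not Javalin-specific; the APP_ENV variable name and the config-<env>.properties naming scheme are assumptions you can adapt:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class EnvConfig {
    // Loads config-<env>.properties from the classpath, where <env> comes from
    // the APP_ENV environment variable, defaulting to "dev".
    public static Properties load() throws IOException {
        String env = System.getenv().getOrDefault("APP_ENV", "dev");
        String resource = "config-" + env + ".properties";
        Properties props = new Properties();
        try (InputStream in = EnvConfig.class.getClassLoader().getResourceAsStream(resource)) {
            if (in != null) {
                props.load(in);
            }
        }
        return props; // empty if the file for this environment is missing
    }
}
```

You would then ship config-dev.properties, config-staging.properties, and config-prod.properties on the classpath and set APP_ENV per deployment.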
Details: When I hit a URL it returns a JSP page which needs to be converted into a PDF. For now I am using iText PDF in a Java component for reading the JSP output and writing it as a PDF. Is there any alternative process within Mule that does not use iText PDF?
import java.io.BufferedReader;
import java.io.ByteArrayOutputStream;
import java.io.FileOutputStream;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import org.mule.api.MuleEventContext;
import org.mule.api.lifecycle.Callable;
import com.itextpdf.text.Chunk;
import com.itextpdf.text.Document;
import com.itextpdf.text.PageSize;
import com.itextpdf.text.pdf.PdfWriter;

public class PDFConversion implements Callable {

    private final String USER_AGENT = "Mozilla/5.0";
    StringBuffer response = new StringBuffer();
    ByteArrayOutputStream bos;
    byte[] result;

    @Override
    public Object onCall(MuleEventContext eventContext) throws Exception {
        try {
            String productid = eventContext.getMessage().getInvocationProperty("productId");
            String vertical = eventContext.getMessage().getInvocationProperty("vertical");
            String postcode = eventContext.getMessage().getInvocationProperty("postcode");
            String metertype = eventContext.getMessage().getInvocationProperty("metertype");
            String includeSolar = eventContext.getMessage().getInvocationProperty("includeSolar");

            String url = "http://191.111.0.111:****/PDFService/electricitypdf?productid=" + productid
                    + "&vertical=" + vertical + "&postcode=" + postcode + "&metertype=" + metertype
                    + "&includeSolar=" + includeSolar;
            System.out.println(" URL -Request-----" + url);

            URL obj = new URL(url);
            HttpURLConnection con = (HttpURLConnection) obj.openConnection();
            con.setRequestMethod("GET"); // optional, default is GET
            con.setRequestProperty("User-Agent", USER_AGENT); // add request header

            int responseCode = con.getResponseCode();
            System.out.println("\nSending 'GET' request to URL : " + url);
            System.out.println("Response Code : " + responseCode);

            BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()));
            String inputLine;
            bos = new ByteArrayOutputStream();
            int next = in.read();
            while (next > -1) {
                bos.write(next);
                next = in.read();
            }
            bos.flush();
            /*
            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            byte[] bytes = IOUtils.toByteArray(in);
            while ((inputLine = in.readLine()) != null) { response.append(inputLine); }
            */
            in.close();
        } catch (Exception e) {
            e.printStackTrace();
        }

        Document document = new Document(PageSize.LETTER, 0.75F, 0.75F, 0.75F, 0.75F);
        document.setPageSize(PageSize.LETTER.rotate());
        // PdfWriter.setPageEvent(new HeaderFooter(mmPDFDocument.right() -
        // mmPDFDocument.left(), footer, path, headerType, unicodeFont));
        PdfWriter.getInstance(document, bos);
        document.open();
        int numberpages = 10;
        for (int i = 1; i <= numberpages; i++) {
            document.add(new Chunk("Hello PDF Service - Page " + i));
            document.newPage();
        }
        FileOutputStream fos = new FileOutputStream("energy.pdf");
        fos.write(bos.toByteArray());
        fos.close();
        document.close();
        return bos;
    }
}
Despite writing this, the browser is unable to display the content as a PDF. I can see that the PDF file is being written, but when the URL is hit it generates a file of no type (not a .pdf). Any help appreciated!
Your solution is basically a standard Java application running on Mule ESB. Mule is for message-based communication and transformations. If you wanted to turn your solution into a full "Mule solution" I would recommend:
1. Use an http:request directive to call the JSP page and get back HTML. Example here: Mule ESB : Read HTML
2. Write a transformer to transform the text/html message to a byte array (or some other type); this part will still have to be done with iText PDF.
3. Output the byte array to a file.
You are basically doing the correct steps but not using any of the advantages of the Mule platform. When you separate the solution out this way you will be able to write the PDF file easily and not get any strange results.
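Separately, the "file of no type" symptom usually means the HTTP response serving the bytes lacks Content-Type/Content-Disposition headers. As a framework-free illustration of just those headers (the JDK's built-in com.sun.net.httpserver is used only for demonstration; in Mule you would set the equivalent outbound properties on the message):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class PdfHeaderDemo {
    // Starts a throwaway server, fetches /pdf once, and returns the
    // Content-Type the client observed.
    public static String serveOnce() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/pdf", exchange -> {
            byte[] pdfBytes = "%PDF-1.4".getBytes(StandardCharsets.ISO_8859_1); // placeholder payload
            // These two headers are what make the browser treat the payload as a PDF
            exchange.getResponseHeaders().set("Content-Type", "application/pdf");
            exchange.getResponseHeaders().set("Content-Disposition", "inline; filename=\"energy.pdf\"");
            exchange.sendResponseHeaders(200, pdfBytes.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(pdfBytes);
            }
        });
        server.start();
        int port = server.getAddress().getPort();
        HttpURLConnection con = (HttpURLConnection) new URL("http://localhost:" + port + "/pdf").openConnection();
        String contentType = con.getHeaderField("Content-Type");
        server.stop(0);
        return contentType;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(serveOnce()); // prints application/pdf
    }
}
```

With "application/pdf" set, the browser renders or downloads the payload as a PDF instead of an untyped file.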
We are experiencing problems while working with Swisscom S3 Dynamic Storage.
When making concurrent test CRUD requests in 5 or more parallel threads, the storage service randomly sends us 403 Forbidden instead of the correct answer. When executing the same requests sequentially, one by one, everything works fine.
The code I am using is below:
import com.amazonaws.ClientConfiguration;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.*;
import com.amazonaws.util.StringInputStream;
import org.apache.commons.io.IOUtils;
import org.junit.Test;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

/**
 * Tutorial https://javatutorial.net/java-s3-example
 */
public class AmazonS3ManualTest {

    public static final String BUCKET_NAME = "??";
    private static String accessKey = "??";
    private static String secretKey = "??";

    @Test
    public void testOperations() throws IOException, InterruptedException {
        final int maxCount = 5;
        final AmazonS3Client amazonS3Client = getS3Client();
        final CountDownLatch latch = new CountDownLatch(maxCount);
        final ExecutorService executor = Executors.newFixedThreadPool(maxCount);
        for (int i = 0; i < maxCount; i++) {
            final int index = i;
            executor.submit(() -> {
                try {
                    final String FolderOne = "testFolderOne" + index;
                    final String FolderTwo = "testFolderTwo" + index;
                    final String FolderCopy = "copyFolder" + index;
                    try {
                        createFile(amazonS3Client, "/" + FolderOne + "/file.txt");
                        createFolder(amazonS3Client, FolderTwo + "/");
                        exists(amazonS3Client, FolderOne + "/file.txt");
                        exists(amazonS3Client, FolderTwo + "/");
                        copy(amazonS3Client, FolderOne + "/file.txt", FolderCopy + "/filecopy.txt");
                        delete(amazonS3Client, "/" + FolderOne);
                        delete(amazonS3Client, "/" + FolderTwo);
                        get(amazonS3Client, FolderCopy + "/filecopy.txt");
                        delete(amazonS3Client, "/" + FolderCopy + "/filecopy.txt");
                        isEmptyFolder(amazonS3Client, "/" + FolderCopy);
                        delete(amazonS3Client, "/" + FolderCopy);
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                    latch.countDown();
                } catch (final Exception ignored) {
                }
            });
        }
        if (!latch.await(300, TimeUnit.SECONDS)) {
            throw new RuntimeException("Waiting too long for the result");
        }
    }

    private void isEmptyFolder(AmazonS3Client amazonS3Client, String folder) {
        final ObjectListing objectListing = amazonS3Client.listObjects(BUCKET_NAME, folder);
        assert (objectListing.getObjectSummaries().isEmpty());
    }

    private void get(AmazonS3Client amazonS3Client, String file) throws IOException {
        GetObjectRequest request = new GetObjectRequest(BUCKET_NAME, file);
        final S3Object object = amazonS3Client.getObject(request);
        final S3ObjectInputStream objectContent = object.getObjectContent();
        final String s = IOUtils.toString(objectContent);
        assert (s.length() > 0);
    }

    private void copy(AmazonS3Client amazonS3Client, String source, String target) {
        CopyObjectRequest request = new CopyObjectRequest(BUCKET_NAME, source, BUCKET_NAME, target);
        amazonS3Client.copyObject(request);
    }

    private void delete(AmazonS3Client amazonS3Client, String path) {
        deleteRecursive(amazonS3Client, path);
    }

    private void deleteRecursive(AmazonS3Client amazonS3Client, String path) {
        ObjectListing objects = amazonS3Client.listObjects(BUCKET_NAME, path);
        for (S3ObjectSummary objectSummary : objects.getObjectSummaries()) {
            if (objectSummary.getKey().equals(path)) {
                continue;
            }
            if (objectSummary.getKey().endsWith("/")) {
                deleteRecursive(amazonS3Client, objectSummary.getKey());
            } else {
                amazonS3Client.deleteObject(BUCKET_NAME, objectSummary.getKey());
            }
        }
        amazonS3Client.deleteObject(BUCKET_NAME, path);
    }

    private void exists(AmazonS3Client amazonS3Client, String folder) {
        GetObjectMetadataRequest request = new GetObjectMetadataRequest(BUCKET_NAME, folder);
        try {
            final ObjectMetadata objectMetadata = amazonS3Client.getObjectMetadata(request);
            assert (objectMetadata != null);
        } catch (AmazonS3Exception e) {
            if (e.getMessage().contains("404")) {
                assert (false);
            }
        }
    }

    private void createFolder(AmazonS3Client amazonS3Client, String folder) {
        final InputStream input = new ByteArrayInputStream(new byte[0]);
        ObjectMetadata metadata = new ObjectMetadata();
        metadata.setContentLength(0);
        amazonS3Client.putObject(new PutObjectRequest(BUCKET_NAME, folder, input, metadata));
    }

    private void createFile(AmazonS3Client amazonS3Client, String fileName) throws IOException {
        ObjectMetadata omd = new ObjectMetadata();
        //omd.setContentType("html/text");
        omd.setHeader("filename", fileName);
        omd.setHeader("x-amz-server-side-encryption", "AES256");
        // upload file to folder and keep it private
        final StringInputStream testFile = new StringInputStream("Test");
        final PutObjectRequest putObjectRequest = new PutObjectRequest(BUCKET_NAME, fileName, testFile, omd);
        amazonS3Client.putObject(putObjectRequest.withCannedAcl(CannedAccessControlList.Private));
        testFile.close();
    }

    private AmazonS3Client getS3Client() {
        ClientConfiguration opts = new ClientConfiguration();
        opts.setSignerOverride("S3SignerType"); // NOT "AWS3SignerType"
        opts.setMaxConnections(100);
        final AmazonS3Client s3 = new AmazonS3Client(new BasicAWSCredentials(accessKey, secretKey), opts);
        s3.setEndpoint("ds31s3.swisscom.com");
        return s3;
    }
}
The exception we are getting is:
com.amazonaws.services.s3.model.AmazonS3Exception: The AWS Access Key Id you provided does not exist in our records. (Service: Amazon S3; Status Code: 403; Error Code: InvalidAccessKeyId; Request ID: null), S3 Extended Request ID: null
Can you please recommend what we can do about this situation? It is abnormal and does not scale.
Update: I recreated a new Dynamic Storage S3 instance and reran the test above. The exceptions are no longer raised; it seems there was an infrastructure problem with the previously created storage.
We ran your snippet 80 times in a row against Swisscom's S3 Dynamic Storage and couldn't reproduce the issue.
However, there might be a timing issue when accessing an object directly after uploading it. The PUT request may be balanced to a different node than the GET request. So if you want to download an object immediately after uploading it, please implement a short sleep or a retry.
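The suggested retry can be wrapped in a small generic helper. This is only a sketch; the attempt count and delay are arbitrary placeholders, not Swisscom-recommended values:

```java
import java.util.concurrent.Callable;

public class S3Retry {
    // Retries an operation up to maxAttempts times with a linear backoff,
    // rethrowing the last failure if every attempt fails.
    public static <T> T withRetry(Callable<T> op, int maxAttempts, long delayMillis) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.call();
            } catch (Exception e) {
                last = e;
                Thread.sleep(delayMillis * attempt); // back off a little more each attempt
            }
        }
        throw last;
    }
}
```

For example, the get() in the test above could become S3Retry.withRetry(() -> amazonS3Client.getObject(request), 5, 200), which rides out a brief window where one node has not yet seen the freshly uploaded object.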
I am trying to run gzip with the --rsyncable option. It works fine when I run it in a terminal window (I am on macOS), but it does not work when I run it from Java with the following code.
Any idea what the problem could be?
import java.io.BufferedReader;
import java.io.File;
import java.io.IOException;
import java.io.InputStreamReader;

public class RsyncIssue {

    public static void printOutput(Process p) throws IOException {
        String ss;
        BufferedReader inReader = new BufferedReader(new InputStreamReader(p.getInputStream()));
        while ((ss = inReader.readLine()) != null) {
            System.out.println("[IN] " + ss);
        }
    }

    public static void main(String[] args) throws IOException {
        File f = new File("Test.ext");
        if (!f.exists()) {
            f.createNewFile();
        }
        String zipCommand = "gzip --rsyncable " + f.getCanonicalPath();
        System.out.println("Zipping file : " + f.getName());
        System.out.println("Zipping command: " + zipCommand);
        Process p = Runtime.getRuntime().exec(zipCommand);
        printOutput(p);
        File zipfile = new File(f.getCanonicalPath() + ".gz");
        if (!zipfile.exists()) {
            throw new RuntimeException("zip file does not exist " + zipfile.getAbsolutePath());
        }
    }
}
Things you should do:
Call p.waitFor() so that execution blocks until the execution has completed.
Then call p.exitValue() and compare to 0 to see if there was an error.
In printOutput also read the process's getErrorStream() to view any errors.
Check that --rsyncable is available on your platform, it is not a universally-available gzip option.
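Putting those points together, a corrected launcher might look like the sketch below. It uses ProcessBuilder (one reasonable choice, not the only one), waits for the child process, and merges stderr into stdout so error messages are not lost:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class RunGzip {
    // Runs a command, echoing its combined stdout/stderr, and returns the exit code.
    public static int run(String... command) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(command);
        pb.redirectErrorStream(true); // merge stderr into stdout so errors are visible
        Process p = pb.start();
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println("[OUT] " + line);
            }
        }
        int exit = p.waitFor(); // block until the child process has finished
        if (exit != 0) {
            System.err.println("command failed with exit code " + exit);
        }
        return exit;
    }

    public static void main(String[] args) throws Exception {
        // For the original problem: run("gzip", "--rsyncable", path) and only
        // check for the .gz file after run() has returned with exit code 0.
        run("gzip", "--version");
    }
}
```

Checking for the .gz file only after waitFor() returns 0 removes the race in the original code, where the existence check ran before gzip had finished (or had a chance to print why it failed).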