How to use Google Translate for free? Maybe you know some alternatives [duplicate] - api

If I pass a string (either in English or Arabic) as input to the Google Translate API, it should translate it into the corresponding other language and return the translated string to me.
I read about the same case in a forum, but it was very hard for me to implement.
I need the translator without any buttons: if I give it the input string, it should automatically translate the value and return the output.
Can you help out?

You can use Google Apps Script, which has a FREE translate API. All you need is an ordinary Google account and these THREE EASY STEPS.
1) Create a new script with code like this in Google Apps Script:
var mock = {
  parameter: {
    q: 'hello',
    source: 'en',
    target: 'fr'
  }
};

function doGet(e) {
  e = e || mock;
  var sourceText = '';
  if (e.parameter.q) {
    sourceText = e.parameter.q;
  }
  var sourceLang = '';
  if (e.parameter.source) {
    sourceLang = e.parameter.source;
  }
  var targetLang = 'en';
  if (e.parameter.target) {
    targetLang = e.parameter.target;
  }
  var translatedText = LanguageApp.translate(sourceText, sourceLang, targetLang, {contentType: 'html'});
  return ContentService.createTextOutput(translatedText).setMimeType(ContentService.MimeType.JSON);
}
2) Click Publish -> Deploy as web app -> Who has access to the app: Anyone, even anonymous -> Deploy. Then copy your web app URL; you will need it for calling the translate API.
3) Use this Java code to test your API:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class Translator {

    public static void main(String[] args) throws IOException {
        String text = "Hello world!";
        // Translated text: Hallo Welt!
        System.out.println("Translated text: " + translate("en", "de", text));
    }

    private static String translate(String langFrom, String langTo, String text) throws IOException {
        // INSERT YOUR URL HERE
        String urlStr = "https://your.google.script.url" +
                "?q=" + URLEncoder.encode(text, "UTF-8") +
                "&target=" + langTo +
                "&source=" + langFrom;
        URL url = new URL(urlStr);
        StringBuilder response = new StringBuilder();
        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        con.setRequestProperty("User-Agent", "Mozilla/5.0");
        BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()));
        String inputLine;
        while ((inputLine = in.readLine()) != null) {
            response.append(inputLine);
        }
        in.close();
        return response.toString();
    }
}
As it is free, there are QUOTA LIMITS: https://docs.google.com/macros/dashboard

Use java-google-translate-text-to-speech instead of Google Translate API v2 Java.
About java-google-translate-text-to-speech
An unofficial API with the main features of Google Translate in Java.
Easy to use!
It also provides a text-to-speech API. If you want to translate the text "Hello!" into Romanian, just write:
Translator translate = Translator.getInstance();
String text = translate.translate("Hello!", Language.ENGLISH, Language.ROMANIAN);
System.out.println(text); // "Bună ziua!"
It's free!
As @r0ast3d correctly said:
Important: Google Translate API v2 is now available as a paid service. The courtesy limit for existing Translate API v2 projects created prior to August 24, 2011 will be reduced to zero on December 1, 2011. In addition, the number of requests your application can make per day will be limited.
This is correct: just see the official page:
Google Translate API is available as a paid service. See the Pricing and FAQ pages for details.
BUT, java-google-translate-text-to-speech is FREE!
Example!
I've created a sample application that demonstrates that this works. Try it here: https://github.com/IonicaBizau/text-to-speech

Generate your own API key here. Check out the documentation here.
You may need to set up a billing account when you try to enable the Google Cloud Translation API in your account.
Below is a quick start example which translates two English strings to Spanish:
import java.io.IOException;
import java.security.GeneralSecurityException;
import java.util.Arrays;

import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
import com.google.api.client.json.gson.GsonFactory;
import com.google.api.services.translate.Translate;
import com.google.api.services.translate.model.TranslationsListResponse;
import com.google.api.services.translate.model.TranslationsResource;

public class QuickstartSample
{
    public static void main(String[] arguments) throws IOException, GeneralSecurityException
    {
        Translate t = new Translate.Builder(
                GoogleNetHttpTransport.newTrustedTransport(),
                GsonFactory.getDefaultInstance(), null)
                // Set your application name
                .setApplicationName("Stackoverflow-Example")
                .build();
        Translate.Translations.List list = t.new Translations().list(
                Arrays.asList(
                        // Pass in list of strings to be translated
                        "Hello World",
                        "How to use Google Translate from Java"),
                // Target language
                "ES");
        // TODO: Set your API-Key from https://console.developers.google.com/
        list.setKey("your-api-key");
        TranslationsListResponse response = list.execute();
        for (TranslationsResource translationsResource : response.getTranslations())
        {
            System.out.println(translationsResource.getTranslatedText());
        }
    }
}
Required Maven dependencies for the code snippet:
<dependency>
    <groupId>com.google.cloud</groupId>
    <artifactId>google-cloud-translate</artifactId>
    <version>LATEST</version>
</dependency>
<dependency>
    <groupId>com.google.http-client</groupId>
    <artifactId>google-http-client-gson</artifactId>
    <version>LATEST</version>
</dependency>
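The google-cloud-translate artifact above also ships its own higher-level client; as a rough alternative sketch (assuming application default credentials are configured, e.g. via GOOGLE_APPLICATION_CREDENTIALS, rather than the raw API key used above), it can be used like this:
import com.google.cloud.translate.Translate;
import com.google.cloud.translate.Translate.TranslateOption;
import com.google.cloud.translate.TranslateOptions;
import com.google.cloud.translate.Translation;

public class CloudTranslateSketch {
    public static void main(String[] args) {
        // Picks up application default credentials from the environment
        Translate translate = TranslateOptions.getDefaultInstance().getService();
        Translation translation = translate.translate(
                "How to use Google Translate from Java",
                TranslateOption.sourceLanguage("en"),
                TranslateOption.targetLanguage("es"));
        System.out.println(translation.getTranslatedText());
    }
}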

I'm tired of looking for free translators, and the best option for me was Selenium (more precisely, Selenide and WebDriverManager) together with https://translate.google.com
import com.codeborne.selenide.Configuration;
import io.github.bonigarcia.wdm.ChromeDriverManager;
import io.github.bonigarcia.wdm.DriverManagerType;

import static com.codeborne.selenide.Selenide.*;

public class Main {
    public static void main(String[] args) {
        ChromeDriverManager.getInstance(DriverManagerType.CHROME).version("76.0.3809.126").setup();
        Configuration.startMaximized = true;
        open("https://translate.google.com/?hl=ru#view=home&op=translate&sl=en&tl=ru");
        String[] strings = {/* some strings to translate */};
        for (String data : strings) {
            $x("//textarea[@id='source']").clear();
            $x("//textarea[@id='source']").sendKeys(data);
            String translation = $x("//span[@class='tlid-translation translation']").getText();
            System.out.println(translation);
        }
    }
}

You can use Google Translate API v2 Java. It has a core module that you can call from your Java code and also a command line interface module.

Related

Selenium with TestRail Integration with latest version

I am using the Gurock API to get the test case status from TestRail.
The code below will return the status of a test case. I provide trRunId in the pom.xml, and the TC name is taken from the method name.
public static int FetchTestRailResult(String trRunId, String TCName, String trusername, String trpassword)
        throws MalformedURLException, IOException, APIException {
    int val = 0;
    APIClient client = new APIClient($testRailurl);
    client.setUser(trusername);
    client.setPassword(trpassword);
    JSONArray array = (JSONArray) client.sendGet("get_tests/" + trRunId + "&status_id=1");
    for (int i = 0; i < array.size(); i++) {
        JSONObject c = (JSONObject) (array.get(i));
        String testrailTestCaseName = c.get("title").toString().split("_")[1];
        if (testrailTestCaseName.equals(TCName)) {
            val = 1;
            break;
        }
    }
    return val;
}
The code below will update the results.
public static void UpdateResultToTestRail(String trusername, String trpassword, String trRunId, String testCaseName, String status, String testStepDetails)
        throws MalformedURLException, IOException, APIException {
    APIClient client = new APIClient($testrailurl);
    client.setUser(trusername);
    client.setPassword(trpassword);
    HashMap data = new HashMap();
    data.put("status_id", status);
    data.put("comment", testStepDetails);
    JSONArray array = (JSONArray) client.sendGet("get_tests/" + trRunId);
    //System.out.println(array.size());
    for (int i = 0; i < array.size(); i++) {
        JSONObject c = (JSONObject) (array.get(i));
        String testrailTestCaseName = c.get("title").toString().split("_")[1];
        if (testrailTestCaseName.equals(testCaseName)) {
            System.out.println(c.get("id"));
            client.sendPost("add_result/" + c.get("id"), data);
            break;
        }
    }
}
I am now migrating to Maven, and it now has this dependency:
<!-- https://mvnrepository.com/artifact/com.codepine.api/testrail-api-java-client -->
<dependency>
    <groupId>com.codepine.api</groupId>
    <artifactId>testrail-api-java-client</artifactId>
    <version>2.0.1</version>
</dependency>
It does not have the same API methods; it has a builder and build(), but beyond that I could not check whether the connection is successful. Has anyone used TestRail with Maven?
I haven't used that library, but it looks fairly easy to use, and they have some docs on their GitHub project page: https://github.com/codepine/testrail-api-java-client
For your use case, I think you just need to do the following:
TestRail testRail = TestRail.builder("https://some.testrail.net/", "username", "password").build();
Tests tests = testRail.tests();
List<Test> lst = tests.list(runId).execute();
// filter it based on your conditions
I did not run the code - just composed it, so it might have some issues, but it should give you an idea of how to use the library.
Please note that as of Feb 26, TestRail is changing their HTTP response for bulk requests (like cases, tests, projects, etc.), so I'm not sure whether that library will still work with the next TestRail version - you will need to check it.
P.S. We are developing a set of products for integration with TestRail, so you might want to look at them. If you are interested, please check out our products:
https://www.agiletestware.com/pangolin
https://www.agiletestware.com/firefly
Based on your testing framework (JUnit or TestNG), try to use one of these libs:
TestRail-JUnit
TestRail-TestNG
Both of them have Medium articles on how to integrate them in just a few steps (see the README.md there).

Why do WordNet and the JWI stemmer give "ord" and "orde" as results of stemming "order"?

I'm working on a project using WordNet and JWI 2.4.0.
Currently, I'm putting a lot of words through the included stemmer, and it seems to work, until I asked for "order".
The stemmer tells me that "order", "orde", and "ord" are the possible stems of "order".
I'm not a native English speaker, but... I have never seen the word "ord" in my life... and when I asked the WordNet dictionary for its definition, there is obviously nothing. (In BabelNet online, I found that it is a town in Nebraska!)
Well, why is there this strange stem?
How can I filter out the stems that are not present in the WordNet dictionary? (Because when I re-use the stemmed words, "orde" makes the program crash.)
Thank you!
ANSWER: I didn't understand well what a stem was. So, this question doesn't make sense.
Here is some code to test:
package JWIExplorer;

import java.io.File;
import java.io.IOException;
import java.net.URL;
import java.util.Arrays;
import java.util.Date;
import java.util.Iterator;
import java.util.List;

import edu.mit.jwi.Dictionary;
import edu.mit.jwi.IDictionary;
import edu.mit.jwi.morph.WordnetStemmer;

public class TestJWI
{
    public static void main(String[] args) throws IOException
    {
        List<String> WordList_Research = Arrays.asList("dog", "cat", "mouse");
        List<String> WordList_Research2 = Arrays.asList("order");
        String path = "./" + File.separator + "dict";
        URL url;
        url = new URL("file", null, path);
        System.out.println("BEGIN : " + new Date());
        for (Iterator<String> iterstr = WordList_Research2.iterator(); iterstr.hasNext();)
        {
            String str = iterstr.next();
            TestStem(url, str);
        }
        System.out.println("END : " + new Date());
    }

    public static void TestStem(URL url, String ResearchedWord) throws IOException
    {
        // construct the dictionary object and open it
        IDictionary dict = new Dictionary(url);
        dict.open();
        // First, let's check for the stem word
        WordnetStemmer Stemmer = new WordnetStemmer(dict);
        List<String> StemmedWords;
        // null for all words, POS.NOUN for nouns
        StemmedWords = Stemmer.findStems(ResearchedWord, null);
        if (StemmedWords.isEmpty())
            return;
        for (Iterator<String> iterstr = StemmedWords.iterator(); iterstr.hasNext();)
        {
            String str = iterstr.next();
            System.out.println("Local stemmed iteration on : " + str);
        }
    }
}
Stems do not necessarily need to be words by themselves. "Order" and "Ordinal" share the stem "Ord".
The fundamental problem here is that stems are related to spelling, but language evolution and spelling are only weakly related (especially in English). As programmers, we would much rather describe a stem as a regex, e.g. ^ord[ie]. This captures that it is not the stem of "ordained".
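If you still want the filtering asked about above, a small sketch along these lines (using JWI's IDictionary.getIndexWord lookup; the class and method names here are just illustrative) keeps only the stems WordNet actually lists for a given part of speech:
import java.util.ArrayList;
import java.util.List;

import edu.mit.jwi.IDictionary;
import edu.mit.jwi.item.POS;
import edu.mit.jwi.morph.WordnetStemmer;

public class StemFilterSketch
{
    // Keep only the stems that the WordNet dictionary actually knows for this part of speech.
    public static List<String> knownStems(IDictionary dict, String word, POS pos)
    {
        WordnetStemmer stemmer = new WordnetStemmer(dict);
        List<String> known = new ArrayList<String>();
        for (String stem : stemmer.findStems(word, pos))
        {
            if (dict.getIndexWord(stem, pos) != null)
            {
                known.add(stem);
            }
        }
        return known;
    }
}
Called with the opened dict from TestStem and POS.NOUN, this should keep "order" and drop stems like "orde" that have no dictionary entry.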

Zapi API - Getting error Expecting claim 'qsh' to have value

I am just trying to fetch general information from the ZAPI API, but I am getting this error:
Expecting claim 'qsh' to have value '7f0d00c2c77e4af27f336c87906459429d1074bd6eaabb81249e1042d4b84374' but instead it has the value '1c9e9df281a969f497d78c7636abd8a20b33531a960e5bd92da0c725e9175de9'
API LINK : https://prod-api.zephyr4jiracloud.com/connect/public/rest/api/1.0/config/generalinformation
Can anyone help me, please?
The query string parameters must be sorted in alphabetical order; this will resolve the issue.
Please see this link for reference:
https://developer.atlassian.com/cloud/bitbucket/query-string-hash/
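As a rough illustration of why the ordering matters, here is a simplified sketch of the canonical request behind the 'qsh' claim (the real Atlassian algorithm also URL-encodes keys and values, so treat this only as an outline):
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Map;
import java.util.TreeMap;

public class QshSketch {

    // Canonical request: METHOD&path&query-string with parameter names sorted alphabetically.
    // An unsorted query string therefore produces a different hash than the server expects.
    static String qsh(String method, String path, Map<String, String> params) throws Exception {
        StringBuilder query = new StringBuilder();
        for (Map.Entry<String, String> e : new TreeMap<String, String>(params).entrySet()) {
            if (query.length() > 0) {
                query.append('&');
            }
            query.append(e.getKey()).append('=').append(e.getValue());
        }
        String canonical = method.toUpperCase() + "&" + path + "&" + query;
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] hash = md.digest(canonical.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : hash) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }
}
Because the parameters go through a TreeMap, the same hash comes out no matter which order the caller supplies them in, which is what the server-side check expects.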
I can definitely help you with this. You need to generate the JWT token in the right way.
package com.thed.zephyr.cloud.rest.client.impl;

import java.io.IOException;
import java.io.UnsupportedEncodingException;
import java.net.URI;
import java.net.URISyntaxException;

import org.apache.http.HttpEntity;
import org.apache.http.HttpResponse;
import org.apache.http.ParseException;
import org.apache.http.client.ClientProtocolException;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.util.EntityUtils;

import com.thed.zephyr.cloud.rest.ZFJCloudRestClient;
import com.thed.zephyr.cloud.rest.client.JwtGenerator;

public class JWTGenerator {

    public static void main(String[] args) throws URISyntaxException, IllegalStateException, IOException {
        String zephyrBaseUrl = "https://prod-api.zephyr4jiracloud.com/connect";
        String accessKey = "TYPE YOUR ACCESS KEY-GET IT FROM ZEPHYR";
        String secretKey = "TYPE YOUR SECRET KEY-GET IT FROM ZEPHYR";
        String userName = "TYPE YOUR USER - GET IT FROM ZEPHYR/JIRA";

        ZFJCloudRestClient client = ZFJCloudRestClient.restBuilder(zephyrBaseUrl, accessKey, secretKey, userName).build();
        JwtGenerator jwtGenerator = client.getJwtGenerator();

        String createCycleUri = zephyrBaseUrl + "/public/rest/api/1.0/cycles/search?versionId=<TYPE YOUR VERSION ID HERE>&projectId=<TYPE YOUR PROJECT ID HERE>";
        URI uri = new URI(createCycleUri);
        int expirationInSec = 360;
        String jwt = jwtGenerator.generateJWT("GET", uri, expirationInSec);
        //String jwt = jwtGenerator.generateJWT("PUT", uri, expirationInSec);
        //String jwt = jwtGenerator.generateJWT("POST", uri, expirationInSec);

        System.out.println("FINAL API : " + uri.toString());
        System.out.println("JWT Token : " + jwt);
    }
}
Also clone this repository: https://github.com/zephyrdeveloper/zfjcloud-rest-api, which gives you all the methods with the respective encodings. You can build a Maven project to have these dependencies imported directly.
*I also spent multiple days figuring this out, so be patient; it just takes time until you generate the right JWT.
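To tie this back to the generalinformation endpoint from the question, a hedged sketch of the actual call could look like the following (the header names follow the zfjcloud-rest-api samples; verify them, and whether your generator already prefixes the token with "JWT ", against that repository):
import java.net.URI;

import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.util.EntityUtils;

import com.thed.zephyr.cloud.rest.ZFJCloudRestClient;
import com.thed.zephyr.cloud.rest.client.JwtGenerator;

public class GeneralInformationCall {

    public static void main(String[] args) throws Exception {
        String zephyrBaseUrl = "https://prod-api.zephyr4jiracloud.com/connect";
        String accessKey = "TYPE YOUR ACCESS KEY";
        String secretKey = "TYPE YOUR SECRET KEY";
        String userName = "TYPE YOUR USER";

        ZFJCloudRestClient client = ZFJCloudRestClient.restBuilder(zephyrBaseUrl, accessKey, secretKey, userName).build();
        JwtGenerator jwtGenerator = client.getJwtGenerator();

        // The JWT must be generated for the exact HTTP method and URI you are about to call,
        // otherwise the qsh claim will not match.
        URI uri = new URI(zephyrBaseUrl + "/public/rest/api/1.0/config/generalinformation");
        String jwt = jwtGenerator.generateJWT("GET", uri, 360);

        HttpGet request = new HttpGet(uri);
        request.addHeader("Authorization", jwt);        // some samples prefix the token with "JWT "
        request.addHeader("zapiAccessKey", accessKey);
        request.addHeader("Content-Type", "application/json");

        HttpClient httpClient = new DefaultHttpClient();
        HttpResponse response = httpClient.execute(request);
        System.out.println(EntityUtils.toString(response.getEntity()));
    }
}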

Lucene 4.1: How to split words that contain "dots" when indexing?

I'm trying to figure out what I should do to index my keywords that contain ".".
For example: this.name
I want to index the terms this and name in my index.
I use the StandardAnalyzer. I tried extending WhitespaceTokenizer or TokenFilter, but I'm not sure if I'm going in the right direction.
If I use the StandardAnalyzer, I'll obtain "this.name" as a keyword, and that's not what I want, but the analyzer does the rest correctly for me.
You can put a CharFilter in front of StandardTokenizer that converts periods and underscores to spaces. MappingCharFilter will work.
Here's MappingCharFilter added to a stripped-down StandardAnalyzer (see the original 4.1 version here):
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.charfilter.MappingCharFilter;
import org.apache.lucene.analysis.charfilter.NormalizeCharMap;
import org.apache.lucene.analysis.core.LowerCaseFilter;
import org.apache.lucene.analysis.core.StopAnalyzer;
import org.apache.lucene.analysis.core.StopFilter;
import org.apache.lucene.analysis.standard.StandardFilter;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.util.StopwordAnalyzerBase;
import org.apache.lucene.util.Version;

import java.io.IOException;
import java.io.Reader;

public final class MyAnalyzer extends StopwordAnalyzerBase {
    private int maxTokenLength = 255;

    public MyAnalyzer() {
        super(Version.LUCENE_41, StopAnalyzer.ENGLISH_STOP_WORDS_SET);
    }

    @Override
    protected TokenStreamComponents createComponents(final String fieldName, final Reader reader) {
        final StandardTokenizer src = new StandardTokenizer(matchVersion, reader);
        src.setMaxTokenLength(maxTokenLength);
        TokenStream tok = new StandardFilter(matchVersion, src);
        tok = new LowerCaseFilter(matchVersion, tok);
        tok = new StopFilter(matchVersion, tok, stopwords);
        return new TokenStreamComponents(src, tok) {
            @Override
            protected void setReader(final Reader reader) throws IOException {
                src.setMaxTokenLength(MyAnalyzer.this.maxTokenLength);
                super.setReader(reader);
            }
        };
    }

    @Override
    protected Reader initReader(String fieldName, Reader reader) {
        NormalizeCharMap.Builder builder = new NormalizeCharMap.Builder();
        builder.add(".", " ");
        builder.add("_", " ");
        NormalizeCharMap normMap = builder.build();
        return new MappingCharFilter(normMap, reader);
    }
}
Here's a quick test to demonstrate it works:
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.BaseTokenStreamTestCase;

public class TestMyAnalyzer extends BaseTokenStreamTestCase {
    private Analyzer analyzer = new MyAnalyzer();

    public void testPeriods() throws Exception {
        BaseTokenStreamTestCase.assertAnalyzesTo(
                analyzer,
                "this.name; here.i.am; sentences ... end with periods.",
                new String[] { "name", "here", "i", "am", "sentences", "end", "periods" });
    }

    public void testUnderscores() throws Exception {
        BaseTokenStreamTestCase.assertAnalyzesTo(
                analyzer,
                "some_underscore_term _and____ stuff that is_not in it",
                new String[] { "some", "underscore", "term", "stuff" });
    }
}
If I understand you correctly, you need to use a tokenizer that removes dots -- that is, any name that contains a dot should be split at that point ("here.i.am" becomes "here" + "i" + "am").
You are getting caught by behavior documented here:
However, a dot that's not followed by whitespace is considered part of a token.
StandardTokenizer introduces more complex parsing rules than you may be looking for. This one, in particular, is intended to prevent splitting up things like URLs, IPs, and identifiers. A simpler implementation, like LetterTokenizer, might suit your needs; see the sketch below.
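As a minimal sketch of that idea (assuming Lucene 4.1's LetterTokenizer and LowerCaseFilter; the class name is just illustrative), an analyzer built on LetterTokenizer could look like this:
import java.io.Reader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.LetterTokenizer;
import org.apache.lucene.analysis.core.LowerCaseFilter;
import org.apache.lucene.util.Version;

// Sketch only: LetterTokenizer splits on anything that is not a letter,
// so "this.name" becomes the tokens "this" and "name".
public final class LetterOnlyAnalyzer extends Analyzer {
    @Override
    protected TokenStreamComponents createComponents(final String fieldName, final Reader reader) {
        final Tokenizer src = new LetterTokenizer(Version.LUCENE_41, reader);
        final TokenStream tok = new LowerCaseFilter(Version.LUCENE_41, src);
        return new TokenStreamComponents(src, tok);
    }
}
Note that it also splits on digits and underscores, which may be more aggressive than you want.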
If that doesn't really suit your needs (and it might well turn out to be throwing the baby out with the bathwater), then you may need to modify StandardTokenizer yourself, which is explicitly encouraged by the Lucene docs:
Many applications have specific tokenizer needs. If this tokenizer does not suit your application, please consider copying this source code directory to your project and maintaining your own grammar-based tokenizer.
Sebastien Dionne: I didn't understand how to split a word; do I have to parse the document char by char?
Sebastien Dionne: I still want to know how to split a token into multiple parts and index them all.
You may have to write a custom analyzer.
Analyzer is a combination of Tokenizer and possibly a chain of TokenFilter instances.
Tokenizer: takes in the input text you pass, usually as a java.io.Reader. It JUST breaks down the text. It doesn't alter it, it just breaks it down.
TokenFilter: takes in the tokens emitted by the Tokenizer, adds / removes / alters tokens, and emits them one by one until all are finished.
If it replaces a token with multiple tokens based on your requirements, it buffers them all and emits them one by one to the indexer, as in the sketch after this paragraph.
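Here is a hedged sketch of that buffering idea against the Lucene 4.x TokenFilter API (the class name is made up); it splits each incoming token on dots and emits the pieces one by one:
import java.io.IOException;
import java.util.ArrayDeque;
import java.util.Deque;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;

public final class DotSplittingFilter extends TokenFilter {
    private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
    private final PositionIncrementAttribute posIncAtt = addAttribute(PositionIncrementAttribute.class);
    private final Deque<String> pending = new ArrayDeque<String>();

    public DotSplittingFilter(TokenStream input) {
        super(input);
    }

    @Override
    public boolean incrementToken() throws IOException {
        // First, drain any pieces buffered from the previous token.
        if (!pending.isEmpty()) {
            termAtt.setEmpty().append(pending.poll());
            posIncAtt.setPositionIncrement(1);
            return true;
        }
        if (!input.incrementToken()) {
            return false;
        }
        String term = termAtt.toString();
        if (term.indexOf('.') < 0) {
            return true;                      // nothing to split, pass the token through
        }
        for (String part : term.split("\\.")) {
            if (!part.isEmpty()) {
                pending.add(part);
            }
        }
        if (pending.isEmpty()) {
            return incrementToken();          // token consisted only of dots; skip it
        }
        termAtt.setEmpty().append(pending.poll());
        return true;
    }

    @Override
    public void reset() throws IOException {
        super.reset();
        pending.clear();
    }
}
In a custom Analyzer you would chain it after your Tokenizer, e.g. new DotSplittingFilter(new WhitespaceTokenizer(Version.LUCENE_41, reader)).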
You may check the following resource; unfortunately, you may have to sign up for a trial membership.
By writing a custom analyzer, you can break down the text the way you want. You may even use some existing components like LowerCaseFilter. Fortunately, with Lucene it is achievable to come up with an Analyzer that serves your purpose if you can't find one built in or on the web.
" Writing Custom Filters: Lucene in Action 2"

How To Use Groovy HTTPBuilder To Get Stories from AgileZen?

I would like to pull stories from Agile Zen using their REST API.
I read:
http://help.agilezen.com/kb/api/overview
http://help.agilezen.com/kb/api/security
Also, I got this to work: http://groovy.codehaus.org/HTTP+Builder
How would one combine the above in order to get Groovy client code to access AgileZen stories?
Here is a code sample that makes one story with an id of 1 show up for a specific project whose id is 16854:
import groovyx.net.http.HTTPBuilder

import static groovyx.net.http.Method.GET
import static groovyx.net.http.ContentType.JSON

public class StoryGetter {

    public static void main(String[] args) {
        new StoryGetter().getStories()
    }

    void getStories() {
        // http://agilezen.com/project/16854/story/4
        // /api/v1/project/16854/story/2
        def http = new HTTPBuilder( 'http://agilezen.com' )
        http.request( GET, JSON ) {
            uri.path = '/api/v1/project/16854/story/1'
            headers.'X-Zen-ApiKey' = 'PUT YOUR OWN API KEY HERE'
            response.success = { resp, json ->
                println "json size is " + json.size()
                println json.toString()
            }
        }
    }
}
I had to put in a fake API key in this post since I should not share my API key.
(By the way, this is not using SSL. A follow-up question about doing this for an SSL-enabled project may come soon.)