I am using an OOTB service in Moqui, for example a service from PartyServices.xml. The messages that are displayed come from the service itself, but I want a minor modification in the displayed message, so I am currently overriding the service in my component just to change the message.
So I wanted to know: am I following best practice, or is there another way to do the same thing?
You can use the automatic internationalization for this. Just add a record for the LocalizedMessage entity with the message from the code as the "original", the desired locale (can be "en" even if the original is in English), and the desired message in the "localized" field.
Note that the "original" should be the actual text coming from the code. If it has a ${} string expansion that should be left as-is. In other words, the localization is done BEFORE string expansion so that the placeholders can be moved around as needed (or even changed...) to handle different languages or to customize messages.
The UI to add/edit l10n messages is in the Application => Tool => Localization => Messages screen.
On a side note, you can see the strings that are cached along with the locale used and the resulting localized string at runtime by looking at the "l10n.message" cache (in Application => Tool => System => Cache List).
I am developing a REST API update method for the user profile resource user/profile. I am unsure which HTTP method I should use. The update contains some required attributes, so it looks more like a PUT request, where the client needs to fill in all the attributes. But how can the attributes be extended in the future? If I decide to add a new attribute, it will automatically be cleared, because the client does not send it yet.
But what if this new attribute has a default value or is set by another route?
Can I use PUT without restricting the number of attributes, and keep the old data if a new value doesn't come in the request? Or how is this normally done?
HTTP is an application whose application domain is the transfer of documents over a network -- Webber, 2011.
PUT is the appropriate method to use when "saving" a new version of a document onto a web server.
"How can the attributes be extended in the future?"
You design your schemas to be forward and backward compatible; in practice, what this means is that you can add new optional elements with reasonable default values. When you need to add a new required element, you change the name of the schema.
You'll find prior art on this topic by searching the XML literature for "must ignore".
You understand correctly: PUT is for complete replacement, so values that you don't include would be lost.
Instead, use the PATCH method, which is for making partial updates. You can update only the properties you include values for.
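For illustration, with a hypothetical profile document and made-up field names, the difference looks like this (using JSON Merge Patch, one common convention for PATCH bodies):

PUT /user/profile HTTP/1.1
Content-Type: application/json

{"displayName": "New Name", "email": "user@example.com", "phone": "555-0100"}

PATCH /user/profile HTTP/1.1
Content-Type: application/merge-patch+json

{"displayName": "New Name"}

If the client sent that PUT while unaware of a newly added phone attribute, phone would be wiped out; the PATCH leaves every property it doesn't mention untouched.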
I am having some difficulty customizing the Aikau faceted search page in Alfresco, which may be due more to my lack of knowledge of dojo/AMD.
What I want to do is to replace the document details page URL by a download URL.
I extended the Search Results widget to include my own custom module:
var searchResultWidget = widgetUtils.findObject(model.jsonModel, "id", "FCTSRCH_SEARCH_RESULT");
if(searchResultWidget) {
searchResultWidget.name = "mynamespace/search/CustomAlfSearchResult";
}
I understand search results URLs are rendered this way: the AlfSearchResult module uses the SearchResultPropertyLink module, which mixes in the _SearchResultLinkMixin renderer; that mixin brings in the "generateSearchLinkPayload" function, which renders URLs depending on the result type.
I want to override this "generateSearchLinkPayload" function but I can't figure out what is the best way to do that.
Thanks in advance for the help!
This answer assumes you're able to use the latest version of Aikau (at the time of writing this is 1.0.61). Older versions might require slightly different overriding...
In order to do this you're going to need to override the createDisplayNameRenderer function of AlfSearchResult in your CustomAlfSearchResult widget. This will allow you to create an extension of alfresco/search/SearchResultPropertyLink.
If you want to take advantage of the download capabilities provided by the alfresco/services/DocumentService for downloading both documents and folders (as a zip), then you're going to want to change both the publishTopic and publishPayload of the SearchResultPropertyLink.
You should extend the getPublishTopic and generateSearchLinkPayload functions. For the getPublishTopic function you'll want to change the return value to be "ALF_SMART_DOWNLOAD" (there are constants available for these strings in the alfresco/core/topics module). This topic can be used to tell the DocumentService to take care of figuring out whether the node is a folder or a document, and it will make an XHR request for the full node metadata (in order to get the contentUrl attribute that is not included in the data returned by the Search API).
You should extend the generateSearchLinkPayload function so that for document or folder types the payload contains a nodes attribute: an array whose single element is the search result object that you wish to download.
I would recommend that you call this.inherited first to get the default payload and only update it for documents and folders.
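A minimal sketch of such an extension, assuming 1.0.61 (the exact argument list of generateSearchLinkPayload is a placeholder here; mirror whatever the mixin actually declares):

define(["dojo/_base/declare",
        "alfresco/search/SearchResultPropertyLink"],
        function(declare, SearchResultPropertyLink) {
   return declare([SearchResultPropertyLink], {

      // Publish to the DocumentService instead of navigating to the details page
      // ("ALF_SMART_DOWNLOAD" also has a constant in alfresco/core/topics)...
      getPublishTopic: function() {
         return "ALF_SMART_DOWNLOAD";
      },

      // Get the default payload first, then reshape it for downloads.
      generateSearchLinkPayload: function() {
         var payload = this.inherited(arguments);
         // "currentItem" is the standard Aikau renderer attribute for the row's data.
         return {
            nodes: [this.currentItem]
         };
      }
   });
});

A fuller version would only swap in the nodes payload for document and folder types and return the default payload otherwise, as recommended above.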
Hopefully that all makes sense - if not, add a comment and I'll try to provide further assistance!
This is the answer for 1.0.25.2 - unfortunately it's not quite so straightforward...
You still need to extend the alfresco/search/AlfSearchResult widget, however this time you need to extend the postCreate function (remembering to call this.inherited(arguments)). It's not possible to stop the original alfresco/search/SearchResultPropertyLink widget from being created... so it will be necessary to find it and destroy it.
The widget is not assigned to a variable, so it will be necessary to find it using dijit/registry. Use the byNode function from dijit/registry to find the widget assigned to this.nameNode and then call destroy on it (be sure to pass the argument true to preserve the DOM). However, you will then need to empty the DOM node so that you can start again...
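Something along these lines, as a sketch (dijit/registry and the destroy/preserve-DOM behaviour are standard Dojo; the rest follows the steps above):

define(["dojo/_base/declare",
        "alfresco/search/AlfSearchResult",
        "dijit/registry",
        "dojo/dom-construct"],
        function(declare, AlfSearchResult, registry, domConstruct) {
   return declare([AlfSearchResult], {
      postCreate: function() {
         this.inherited(arguments);

         // Find the SearchResultPropertyLink that was created on this.nameNode
         // and destroy it, passing true to preserve the DOM...
         var link = registry.byNode(this.nameNode);
         if (link) {
            link.destroy(true);
         }

         // ...then empty the node so the custom link widget can be created here.
         domConstruct.empty(this.nameNode);
      }
   });
});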
Now you need to add in your extension to alfresco/search/SearchResultPropertyLink. Unfortunately, because the smart download capability is not available, you'll need to do more work. The difference here is that you'll need to make an XHR request to retrieve the full node metadata in order to obtain the contentURL. It's possible to publish a request to the DocumentService (via the "ALF_RETRIEVE_SINGLE_DOCUMENT_REQUEST" topic). However, you need to be aware that having the XHR step will not allow you to then proceed with the download as is. Instead you'll need to use an iframe download solution; I'd suggest you take a look at the changes in the pull request we recently made to solve this problem and backport them into your own solution.
Is it possible to write a plugin for Glimpse's existing SQL tab?
I'm trying to log my SQL queries, and the currently available extensions don't support our in-house SQL library. I have written a custom plugin which logs what I want, but it has limited functionality and it doesn't integrate with the existing SQL tab.
Currently, I'm logging to my custom plugin using a single helper method inside my DAL's base class. This function takes the SqlCommand and duration in order to show data on my custom tab:
// simplified example:
Stopwatch sw = Stopwatch.StartNew();
sqlCommand.Connection = sqlConnection;
sqlConnection.Open();
object result = sqlCommand.ExecuteScalar();
sqlConnection.Close();
sw.Stop();
long duration = sw.ElapsedMilliseconds;
LogSqlActivity(sqlCommand, null, duration);
This works well on my 'custom' tab but unfortunately means I don't get metrics shown on Glimpse's HUD.
Is there a way I can provide Glimpse directly with the info it needs (in terms of method names, and parameters) so it displays natively on the SQL tab?
The following advice is based on the fact that you can't use DbProviderFactory and you can't use a proxied SqlCommand, etc.
The data that appears in the "out-of-the-box" SQL tab is based on messages of given types being published through our internal Message Broker (see below for information on this). Because of the above limitations in your case, to get things lighting up correctly (i.e. your data showing up in the HUD and the SQL tab), you will need to simulate the work that we do under the covers when we publish these messages. This shouldn't be that difficult, and once done, should just work moving forward.
If you have a look at the various proxies we have here, you will be able to see what messages we publish in what circumstances. Here are some highlights:
DbCommand:
Log command start - here
Log command error - here
Log command end - here
DbConnection:
Log connection open - here
Log connection closed - here
DbTransaction:
Log started - here
Log committed - here
Log rollback - here
Other:
Command row count - here (Glimpse calculates this at the DbDataReader level, but you could do it elsewhere as well)
Now that you have an idea of what messages we are expecting and how we generate them, as long as you pass in the right data when you publish those messages, everything should just light up - if you are interested here is the code that looks for the messages that you will be publishing.
Message Broker: If you look at the GlimpseConfiguration here you will see how to access the Broker. This can be done statically if needed (as we do here). From there you can publish the messages you need.
Helpers: For generating some of the above messages, you can use the helpers inside the Support class here. I would have shifted all the code for publishing the actual messages into this class, but I didn't think there would be many people doing what you are doing.
Update 1
Starting point: With the above approach you shouldn't need to write your own plugin. You should just be able to access the broker via GlimpseConfiguration.GetConfiguredMessageBroker() (make sure you check whether it's null, which it is if Glimpse is turned off, etc.) and publish your messages.
I would imagine that you would put the inspection code that leverages the broker and publishes the messages wherever you have knowledge of the information that needs to be collected (i.e. inside your custom lib). Normally this would require references to Glimpse inside your lib (which you may not want), so to protect against this, from your lib you would call a proxy (which could be another VS project) that has the Glimpse dependency. Hence your ADO lib only has references to your own code.
To get your toes wet, try just publishing a couple of fake connection and command messages. Assuming the broker you get from GlimpseConfiguration.GetConfiguredMessageBroker() isn't null, these should just show up. Then you can work towards getting real data into it from your lib.
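As a rough sketch of that first step (GetConfiguredMessageBroker and the null check come straight from the above; SqlCommandMessage is a made-up placeholder type, since the real message classes and their constructors need to be taken from the proxy source linked earlier):

// Placeholder standing in for the real Glimpse ADO message types; the SQL tab
// only listens for the actual types found in the proxy source linked above.
public class SqlCommandMessage
{
    public string CommandText { get; private set; }
    public long DurationMs { get; private set; }

    public SqlCommandMessage(string commandText, long durationMs)
    {
        CommandText = commandText;
        DurationMs = durationMs;
    }
}

// Inside your DAL helper, next to the existing LogSqlActivity call:
var broker = GlimpseConfiguration.GetConfiguredMessageBroker();
if (broker != null) // null when Glimpse is turned off, etc.
{
    broker.Publish(new SqlCommandMessage(sqlCommand.CommandText, duration));
}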
Update 2
Obsolete Broker Access
It's marked as obsolete because it's going to change in v2. You will still be able to do what you need to do, but the way of accessing the broker has changed. For what you currently need to do, this is OK.
Sometimes null
As you have found this is really dependent on where in the page lifecycle you are currently at. To get around this, I would probably change my original recommendation a little.
In the code where you are currently creating messages and pushing them to the message bus, try putting them into HttpContext.Current.Items. If you haven't used it before, this is a store that ASP.NET provides out of the box and that lasts for the lifetime of a given request. You could have a list that you put in there: still create the message objects as you are doing, but put them into that list instead of pushing them through the broker.
Then, create an HttpModule (it's really simple to do) which taps into the PostLogRequest event. Within this handler, you would pull the list out of the context, iterate through it, and push each message into the message broker (accessed the same way you have been).
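A minimal sketch of that module, under a couple of assumptions: the "GlimpseMessages" key is made up (use whatever key you stored the list under), and the dynamic cast is just one way to make the generic Publish call resolve to each message's concrete type:

using System;
using System.Collections.Generic;
using System.Web;

public class GlimpseMessageFlushModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        // PostLogRequest fires near the end of the request lifecycle.
        application.PostLogRequest += OnPostLogRequest;
    }

    private void OnPostLogRequest(object sender, EventArgs e)
    {
        // "GlimpseMessages" is a made-up key; match whatever your DAL code uses.
        var messages = HttpContext.Current.Items["GlimpseMessages"] as List<object>;
        if (messages == null) return;

        var broker = GlimpseConfiguration.GetConfiguredMessageBroker();
        if (broker == null) return; // Glimpse turned off, etc.

        foreach (var message in messages)
        {
            broker.Publish((dynamic)message); // dispatch with each message's runtime type
        }
    }

    public void Dispose() { }
}

Remember to register the module in web.config so it actually runs.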
Given a simple page:
<form>
<input type="email">
<button>click</button>
</form>
If I enter anything in the text field that is not an e-mail address and click the button, a "Please enter an email address" message appears.
Is there a way to check if the message appears using Selenium or Watir? As far as I can see, nothing new appears in the browser DOM.
Since the page is using e-mail check that is built in feature of a browser, does it even make sense to check that error message appears? It is at the same level as checking if browser scroll bar works. We are no longer checking the web application, but the platform (browser).
An earlier related question here on SO is: How do I test error conditions in HTML5 pages with cucumber?
I'd agree that this is starting to get close to 'testing the browser', otoh if your code depends on that browser feature, then the site needs to produce the proper html(5) code so that the browser knows you want that level of validation, and that is something you can validate.
Some good background on this at art of the web
The browser-side validation is triggered by the combination of the specific input type (which you could check via Watir's .type method) and the new required attribute (which might be trickier to check). Given this, I think I actually see a pretty good case here for a new feature in Watir. I'm thinking we could use a .required? method, which should be supported for all input elements that support the new 'required' attribute, and would return true if that 'required' attribute is present.
So at the moment what you could do is if you know you are running on an HTML5 browser that supports this feature, you could both check that the input type is 'email' and also that the 'required' attribute is there. (I don't offhand have an idea for how to do that, but perhaps someone else can suggest a way).
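One possible way, as a sketch (Watir's .type is mentioned above, and attribute_value is a standard Watir element method; the URL and CSS locator are made up for illustration):

require 'watir-webdriver' # the gem of that era; the modern gem is just 'watir'

browser = Watir::Browser.new
browser.goto 'http://example.com/form' # made-up URL

field = browser.input(:css => 'input[type=email]')
puts field.type                        # => "email"
puts field.attribute_value('required') # => "true" when the attribute is present, nil otherwise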
The other thing to be sure of would be to provide an invalid value and make sure that the form will not allow itself to be submitted, e.g. is the validation enforced, or merely advisory? (And don't forget to check the server side by submitting invalid data anyway using something like curl or httparty, since a hostile user could easily bypass any in-browser validation and submit that form with bogus or even 'hostile' values designed to cause a buffer overflow or injection attack. No site should depend exclusively on client-side validation.)
The other thing to consider of course is what if the user is on a browser that does NOT support that particular validation feature, what does the page do then? I'd presume some kind of client side javascript and a message that is displayed.
In that case it really depends on the way in which the message is made to 'appear'. In the app I am testing, messages of that sort happen to be in handy divs with unique classes, which are controlled by CSS to normally be hidden and then set to displayed when the client-side JS detects the need to display a message for the user. So of course I have tests for stuff like that. Here's an example of one for someone agreeing to our terms and conditions. (Note: we use Cucumber, and the Test-Factory gem for page objects and data objects.)
Let's start with the message, and what right-click / Examine Element reveals: the message lives in a div with the class "terms-error", which the page object below hooks into.
The scenario in the grower_selfregister.feature file looks like this:
Scenario: self registering grower is required to agree to terms and conditions
Given I am a unregistered grower visiting www.climate.com
When I try to self-register without agreeing to the terms and conditions
Then I am informed that Acceptance of the terms and conditions is required
And I remain on the register page
The critical Cucumber steps are:
When /^I try to self-register without agreeing to the terms and conditions$/ do
  @my_user.self_register(:agree_to_terms => FALSE)
end

Then /^I am informed that Acceptance of the terms and conditions is required$/ do
  on Register do |page|
    page.agree_to_terms_required_message.should be_visible
  end
end
Then /^I remain on the register page$/ do
  on Register do |page|
    # ToDo change this to checking the page title (or have the page object do it) once we get titles
    page.url.should == "#{$test_site}/preso/register.html"
  end
end
The relevant portions of the register.rb page object are:
class Register < BasePage
  # bunch of other element definitions removed for brevity
  element(:agree_to_terms_required_message) { |b| b.div(:class => "terms-error") }
end
Does that help provide an example?
Note: it could easily be argued that the second validation (staying on the page) is redundant, since I'm unlikely to see the message if I'm not still on the page. But it adds clarity to the expected user experience described in the scenario. Also, it could be very important if the implementation were to change: for example, if it was decided to use something like a JavaScript alert, then it may well be important to validate that once the alert is dismissed (something the "I see the message" step would likely do) the user did not proceed into the site anyway.
You could try it like this:
// Assumes `driver` is an initialized WebDriver and the email input has id="a".
JavascriptExecutor js = (JavascriptExecutor) driver;

// Type an invalid value and ask the browser whether the field considers it valid.
driver.findElement(By.cssSelector("input[type='email']")).sendKeys("asd");
Object s = js.executeScript("return document.getElementById(\"a\").validity.valid;");
System.out.println(s);

// Clear the field first -- sendKeys appends to the existing value.
driver.findElement(By.cssSelector("input[type='email']")).clear();
driver.findElement(By.cssSelector("input[type='email']")).sendKeys("asd@gmail.com");
s = js.executeScript("return document.getElementById(\"a\").validity.valid;");
System.out.println(s);
The output for the above is:
false
true
So based on that you can conclude whether the given value is valid or not.
The JavaScript logic below will return true/false based on the validity of the field:
document.getElementById("a").validity.valid
P.S.: I assume the input tag has id "a".
I haven't found a way to check for the actual error message. However, you can check that the input field is invalid after entering a non-email, using the :invalid CSS pseudo-selector.
WebElement invalidInput = driver.findElement(By.cssSelector("input:invalid"));
assertThat(invalidInput.isDisplayed(), is(true));
Rails 3.0.10 and activemerchant gem 1.29.3
My app works fine in the sandbox, but transactions in production mode are failing with "Security header is not valid", "ErrorCode"=>"10002".
We initiated a support request with PayPal after reviewing all the configuration parameters a million times, and they feel we're hitting an incorrect endpoint. They've asked for a full trace of the transaction, including headers, etc., so I'm trying to figure out how to do that. I found this article, which suggested adding this to the config block:
ActiveMerchant::Billing::PaypalGateway.wiredump_device = File.new(File.join([Rails.root, "log", "paypal.log"]), "a")
But that just results in an empty log; nothing gets dumped to it.
So, how can I obtain this info from the GATEWAY object, if possible? Here's the production config, the format of which is identical to what's used in staging env.
::GATEWAY = ActiveMerchant::Billing::PaypalGateway.new(
  :login => 'me_api1.blah...',
  :password => 'string...',
  :signature => 'longer string...'
)
Thanks.
Needed to add the additional line as follows:
ActiveMerchant::Billing::PaypalGateway.wiredump_device.sync = true
This goes within the same config block in the environment file.
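So the full logging setup in the environment config ends up being both lines together:

ActiveMerchant::Billing::PaypalGateway.wiredump_device = File.new(File.join([Rails.root, "log", "paypal.log"]), "a")
ActiveMerchant::Billing::PaypalGateway.wiredump_device.sync = true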
Somewhere in the class library you're using there should be a function to output this for you (if it's a well-built library, that is).
Even without that, though, you should be able to look in that PaypalGateway function to see where/how it's setting the endpoint. It's either hard-coding the value or it'll be setting different endpoints based on some sandbox option you have configured somewhere else in the class.
It's hard to tell you more than that without getting a look at the actual class library you're using, but I can concur that it must be either incorrect credentials or an incorrect endpoint. I've never once seen that security header error when it wasn't simply invalid credentials, which means either your values are incorrect or you're hitting the wrong endpoint.
If you want to post that whole function (or maybe even the whole library as the endpoint could be getting set from some other function) I can take a look and find the problem for you.