I am configuring the Power BI Analysis Services Connector but am getting the error below:
"the remote server returned an error: (400) Bad Request. [REQUESTID: e202c423-cd6a-4fd8-ae27-9b8b5231d433] [UTC TIMESTAMP: 01-03-2016 13:43:47] [CLUSTER: wabi-south-east-asia-redirect.analysis.windows.net]"
Does anyone have a solution for this?
I feel your pain. I also struggled with this for about a day. I then saw a small post on a support site saying that the Power BI Analysis Services Connector has actually been replaced by a new piece of software called Power BI gateway for enterprise deployments.
You can find it here:
https://powerbi.microsoft.com/en-us/blog/announcing-preview-of-power-bi-gateway-for-enterprise-deployments/
It's a joke how easy it was to set up the connection after installing that.
Good Luck
I recently installed RabbitMQ on CentOS 8 for my company. We also use Splunk Enterprise, so we want to integrate RabbitMQ with Splunk so that we can see, search, and check the logs coming from RabbitMQ in Splunk. I don't know how to do that; I googled it but didn't find any information about it. Can anybody help me with this? Thank you.
The scripts at https://github.com/jerrykuch/rabbitmq-splunk are somewhat old, but should still be functional. They can leverage the HTTP API to pull in the relevant data. That page also lists the recommended files to monitor.
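To illustrate the HTTP API approach, here is a minimal sketch of a Splunk scripted input that polls the RabbitMQ management API and prints key=value events Splunk can parse. The endpoint, credentials, and field selection are assumptions (it presumes the management plugin is enabled on its default port 15672 with default guest/guest credentials); adjust for your environment.

```python
# Sketch: poll the RabbitMQ management HTTP API and emit
# key=value lines suitable for a Splunk scripted input.
# URL and credentials below are placeholders -- change them.
import base64
import json
import urllib.request

API_URL = "http://localhost:15672/api/queues"  # assumed default endpoint

def fetch_queues(url=API_URL, user="guest", password="guest"):
    """Fetch per-queue stats from the management API as parsed JSON."""
    req = urllib.request.Request(url)
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode())

def to_splunk_events(queues):
    """Render queue stats as key=value lines Splunk parses natively."""
    events = []
    for q in queues:
        events.append(
            'name="{}" vhost="{}" messages={} consumers={}'.format(
                q.get("name"), q.get("vhost"),
                q.get("messages", 0), q.get("consumers", 0)))
    return events

# Typical use from a scripted input:
#   for line in to_splunk_events(fetch_queues()):
#       print(line)
```

You would register a script like this as a scripted input on a forwarder, with an interval matching how fresh you need the metrics to be.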
Always check Splunkbase when looking for ingesting data types - often there exist apps and add-ons that will do what you're looking for
Here are two related to RabbitMQ:
JMS Messaging Modular Input - https://splunkbase.splunk.com/app/1317/#/details
AMQP Messaging Modular Input - https://splunkbase.splunk.com/app/1812/#/details
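For the plain log files themselves, a minimal `inputs.conf` monitor stanza on the forwarder might look like the following. The path, index, and sourcetype are assumptions; check where your CentOS 8 RabbitMQ install actually writes its logs before using this.

```ini
# Hypothetical inputs.conf stanza -- adjust path, index and sourcetype
# to match your deployment.
[monitor:///var/log/rabbitmq/*.log]
sourcetype = rabbitmq:log
index = main
disabled = false
```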
I am trying to connect to a mainframe from Mule ESB. We have CICS regions, but I am not sure how useful the CICS regions are for connecting, and whether we need to connect through MQ to integrate with the mainframe. Is there any way to connect to mainframes without using MQ?
CICS itself is capable of being connected to using many different transports, including MQ and HTTP. Within those transports, CICS also supports many data formats, including SOAP for Web Services, JSON, binary, and so on.
Which of these have been enabled depends on your organisation's exact setup, so you'll need to find out which transports are available for you to use and which data formats they're talking.
If you have IBM's WebSphere MQ on your mainframe, you will find it easy to communicate with your mainframe CICS transactions using the standard JMS component in Mule; we do this all the time using ActiveMQ, which is very familiar to any Mule developer. You will need a JMS "bridge" to connect ActiveMQ to WebSphere MQ - see "ActiveMQ bridge connector to WebSphereMQ without using XML config".
Once you have connectivity, there are a lot of alternatives as to the various data formats and message payloads. As Ben Cox says, you have a bewildering array of choices, from raw application objects to XML, SOAP, JSON and so forth. Most of what you use at this level will probably depend on whether you're connecting to existing applications, or building new software that you can control.
If you're comfortable extending Mule using its Connector Factory APIs, you should be able to encapsulate most of the information in a way that's Mule-friendly. We do this today in several large systems and it works quite well overall.
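As a rough illustration of the JMS bridge mentioned above, an ActiveMQ broker configuration can forward a local queue to a WebSphere MQ queue. Everything below is a placeholder sketch (broker name, queue names, host, queue manager); consult your MQ administrator for the real connection details.

```xml
<!-- Hypothetical activemq.xml fragment bridging a local ActiveMQ queue
     to a WebSphere MQ queue. All names and hosts are placeholders. -->
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localBroker">
  <jmsBridgeConnectors>
    <jmsQueueConnector outboundQueueConnectionFactory="#wmqConnectionFactory">
      <outboundQueueBridges>
        <outboundQueueBridge localQueueName="TO.MAINFRAME"
                             outboundQueueName="CICS.REQUEST.QUEUE"/>
      </outboundQueueBridges>
    </jmsQueueConnector>
  </jmsBridgeConnectors>
</broker>

<!-- WebSphere MQ connection factory (requires the IBM MQ client jars) -->
<bean id="wmqConnectionFactory"
      class="com.ibm.mq.jms.MQQueueConnectionFactory">
  <property name="hostName" value="mainframe.example.com"/>
  <property name="port" value="1414"/>
  <property name="queueManager" value="QM1"/>
  <property name="transportType" value="1"/> <!-- 1 = client mode -->
</bean>
```

With a bridge like this in place, the Mule flow only ever talks plain JMS to ActiveMQ, and the broker handles forwarding to the mainframe side.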
I am trying to host WCF services on an on-premise Service Bus but I understand that relaying is not yet supported. Is there a scheduled release date for this functionality?
Thanks for your help.
The Relay feature will not be supported in Service Bus Server in the near future.
The customer segment for this feature in the Server space is very small - close to zero - as relaying is primarily about getting data/services out of the enterprise (through the firewall) without having to resort to heavyweight solutions like VPN.
HTH!
Sree
According to Clemens Vasters, Architect, Azure Messaging: "If it's up to me then relay is also going to go into that new [Azure Stack] server". But it probably won't happen soon.
Reference: a May 12 Integrate 2016 talk titled "Service Bus – Roadmap, What's next?". This statement was made at about 46:19. You can listen to announcements related to Service Bus for Windows Server at about 40:35, and you can read some paraphrased transcripts here.
Could someone please confirm whether Azure diagnostics is possible for WCF services hosted in Azure?
I followed this guide: http://blogs.msdn.com/b/sajid/archive/2010/04/30/basic-azure-diagnostics.aspx
After calling Trace.WriteLine("Information","testing"), I was expecting a WAD table to appear in Azure storage, but nothing shows up.
Thanks
What was your transfer period and filter level, and how long did you wait to see it appear? Do you have the AzureDiagnostics trace listener in your config file (or added programmatically)? Nothing will appear if you are not using the listener.
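For reference, the listener registration mentioned above looks roughly like this in web.config or app.config. The assembly version shown is from the early SDKs and may differ in yours, so verify it against the SDK you actually have installed.

```xml
<!-- Sketch of the WAD trace listener registration; verify the assembly
     version against your installed SDK. -->
<system.diagnostics>
  <trace>
    <listeners>
      <add name="AzureDiagnostics"
           type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35">
        <filter type="" />
      </add>
    </listeners>
  </trace>
</system.diagnostics>
```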
Diagnostics by default are aggregated locally and will never appear in your storage account unless you actually initiate a transfer with the correct filter level (Undefined will transfer everything). There are billing reasons why that is the case, btw (it costs you money in transactions and storage).
This blog post is about 18 months old, and there have been some breaking changes to Windows Azure Diagnostics since then from an SDK perspective. Please refer to the latest SDK documentation or these blog posts:
http://convective.wordpress.com/2010/12/01/configuration-changes-to-windows-azure-diagnostics-in-azure-sdk-v1-3/
http://blog.bareweb.eu/2011/03/implementing-azure-diagnostics-with-sdk-v1-4/
I'm trying to publish some HL7 schemas (with quite a few includes) as WCF services using the "WCF Service Publishing Wizard". The wizard seemingly runs and completes just fine, creating a service that exposes the schemas I want. But when I try to browse the newly created service, I get "Server Application Unavailable". I looked in the Event Viewer and noticed the error message "System.OutOfMemoryException". I tested once more while watching Task Manager, and I noticed that aspnet_wp.exe was consuming more than 1 GB of RAM before it was terminated (the application pool was probably recycled after reaching the maximum memory consumption allowed).
I was quite puzzled as to why this happened, so I decided to publish the same schema as an ASMX web service using the "Web Services Publishing Wizard" to see if it would make any difference. After running the wizard I tried to browse the service, and it worked out just fine with no problems whatsoever. I looked at the generated WSDL definition, which was huge; all the referenced schemas were added as inline schemas, not as includes or imports.
This led me to believe that it could be an issue with the generation of the WSDL, given how many includes the published schema has, but I'm not at all sure yet whether this is the case.
Has anyone experienced similar problems trying to publish schemas as WCF services?
I welcome any suggestions that could point me in the right direction on this issue.
Thanks.
-M.Papas
This problem is definitely a memory issue with the WSDL generation tool. Publishing complex or even semi-complex schemas as Web Services or WCF Services usually ends in out-of-memory exceptions. I've run into this a few times doing an SAP IDoc demo, and it's just that the schema is too complex for the WSDL tool. Hope that helps.