Kafka Streams code implemented as a library, with functions called via an API

So, I have created a Kafka Streams application that basically uses a filter function, and I have created a jar file of this application.
I want to import this application as a library in some other program and call a method to start the filtering process. Then I also want to stop the filtering using some command or API call. How can I do this?
I have provided the topology builder function below, which I want to convert into a library and call from another application.
public static Topology builderFunc(String id) {
    final Serializer<JsonNode> jsonNodeSerializer = new JsonSerializer();
    final Deserializer<JsonNode> jsonNodeDeserializer = new JsonDeserializer();
    final Serde<JsonNode> jsonNodeSerde = Serdes.serdeFrom(jsonNodeSerializer, jsonNodeDeserializer);

    final StreamsBuilder builder = new StreamsBuilder();
    KStream<String, JsonNode> inputStream =
            builder.stream("input-records", Consumed.with(Serdes.String(), jsonNodeSerde));
    inputStream.filter((key, value) -> key.equals(id))
            .to("output-records", Produced.with(Serdes.String(), jsonNodeSerde));

    final Topology topology = builder.build();
    return topology;
}
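One way to do this (not from the original post; just a minimal sketch assuming builderFunc stays as shown and the host application supplies the usual Streams configuration) is to ship a small wrapper class in the jar that owns the KafkaStreams lifecycle, so the other program only ever calls start and stop:

import java.util.Properties;

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.Topology;

// Hypothetical wrapper class; only builderFunc comes from the original code
// (assumed to live in this class or be statically imported).
public class FilterService {

    private KafkaStreams streams;

    // Starts filtering records whose key equals the given id. The properties
    // must contain at least application.id and bootstrap.servers.
    public synchronized void start(String id, Properties props) {
        if (streams == null) {
            Topology topology = builderFunc(id);
            streams = new KafkaStreams(topology, props);
            streams.start();
        }
    }

    // Stops the filtering process and releases the client resources.
    public synchronized void stop() {
        if (streams != null) {
            streams.close();
            streams = null;
        }
    }
}

The other application would then create a FilterService, call start("some-id", props) to begin filtering, and later call stop() from whatever command or API handler should end it.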

Related

Generate Link To Spring Data Rest Controller

I created a REST API with Spring Data REST that works fine. It must be possible to clone projects via the API, so I added a custom @RestController to implement that via POST /projects/{id}/clone.
@RestController
@RequestMapping(value = "/projects", produces = "application/hal+json")
@RequiredArgsConstructor(onConstructor = @__(@Autowired))
public class ProjectCloneController {

    private final ProjectRepo projectRepo;

    @PostMapping("/{id}/clone")
    public EntityModel<Project> clone(@PathVariable String id) {
        Optional<Project> origOpt = projectRepo.findById(id);
        Project original = origOpt.get();
        Project clone = createClone(original);
        EntityModel<Project> result = EntityModel.of(clone);
        result.add(linkTo(ProjectRepo.class).slash(clone.getId()).withSelfRel());
        return result;
    }
}
I am stuck at the point where I need to add a link to the EntityModel that points to an endpoint provided by Spring Data REST. It will need to support a different base path, and react correctly to X headers as well.
Unfortunately, the line above (linkTo and slash) just generates http://localhost:8080/636f4aaac9143f1da03bac0e, which is missing the name of the resource.
Check org.springframework.data.rest.webmvc.support.RepositoryEntityLinks.linkFor
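For example, a sketch (not verified against the original project) assuming RepositoryEntityLinks is injected as an extra constructor field, and a recent Spring HATEOAS (on older versions the method is linkToSingleResource instead of linkToItemResource):

// Spring Data REST registers RepositoryEntityLinks as a bean; with
// @RequiredArgsConstructor it can simply be declared as another final field.
private final RepositoryEntityLinks entityLinks;

// inside clone(): resolves the item URI the way Spring Data REST itself does,
// e.g. http://localhost:8080/projects/636f4aaac9143f1da03bac0e
result.add(entityLinks.linkToItemResource(Project.class, clone.getId()).withSelfRel());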

ChronicleQueue - how to read custom object from tailer

I am very new to ChronicleQueue, and I am unable to find a straightforward example of how I can read back my custom object from a tailer.
public class MyData extends AbstractMarshallable
I have my class containing some strings and numbers, and I am able to write to the queue using an appender, but there is no straightforward API to call. How can I get an object of MyData back from the tailer.readDocument API?
Try the code below:
final MyData container = new MyData();
try (final DocumentContext context = queue.createTailer().readingDocument()) {
    if (context.isPresent()) {
        // read the fields of the next document directly into the container
        context.wire().getValueIn().marshallable(container);
    }
}
This assumes that appending is performed in the following manner:
try (DocumentContext ctx = appender.writingDocument()) {
    ctx.wire().getValueOut().marshallable(myData);
}
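For reference, a minimal sketch of what MyData might look like (the fields here are hypothetical). AbstractMarshallable comes from Chronicle Wire and serializes/deserializes all instance fields reflectively, which is why no field-by-field read or write code is needed:

import net.openhft.chronicle.wire.AbstractMarshallable;

public class MyData extends AbstractMarshallable {
    // hypothetical fields; Chronicle Wire reads and writes them automatically
    private String name;
    private long count;
    private double price;
}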

Deploying SSRS RDL files from VB.Net - Issue with shared datasources

I am currently developing a utility to help automate our report deployment process. Multiple files, in multiple folders, to multiple servers.
I am using the reportservice2010.asmx web service, and I am deploying my files to the server - so most of the way there.
My issue is that I have shared data sets and shared data sources, which are deployed to individual folders, separate to the report folders. When the deployment occurs the web service looks locally for the data source rather than in the data source folder, giving an error like:
The dataset ‘CostReduction’ refers to the shared data source ‘CostReduction’, which is not
published on the report server. The shared data source ‘CostReduction’ must be published
before this report can run.
The data source/set has been deployed and the report functions correctly but I need to suppress these error messages as they may be hiding other actual errors.
I can hard-code a lookup that checks whether the data source/set exists and manually filter them that way, but it seems very inefficient. Is there any way I can tell the web service where to look for these files, or another approach that other people have used?
I'm not looking at changing the reports so the data source is read from
/DataSources/DataSourceName
as there are lots of reports and that's not how our existing projects are configured.
Many thanks in advance.
I realize you are using VB, but perhaps this will give you a clue if you convert it from C# to VB using one of the translators on the web.
Hopefully this will give you a lead in the right direction.
When all the reports in a particular folder (referred to here as the 'parent folder') use the same shared data source, I use this to set all of them to the same shared data source (in this case "/DataSources/Shared_New"):
using GetPropertiesSample.ReportService2010;
using System.Diagnostics;
using System.Collections.Generic; //<== required for LISTS
using System.Reflection;

namespace GetPropertiesSample
{
    class Program
    {
        static void Main(string[] args)
        {
            GetListOfObjectsInGivenFolder_and_ResetTheReportDataSource("0_Contacts"); //<=== This is the parent folder
        }

        private static void GetListOfObjectsInGivenFolder_and_ResetTheReportDataSource(string sParentFolder)
        {
            // Create a Web service proxy object and set credentials
            ReportingService2010 rs = new ReportingService2010();
            rs.Credentials = System.Net.CredentialCache.DefaultCredentials;

            CatalogItem[] reportList = rs.ListChildren(@"/" + sParentFolder, true);
            int iCounter = 0;
            foreach (CatalogItem item in reportList)
            {
                iCounter += 1;
                Debug.Print(iCounter.ToString() + "]#########################################");
                if (item.TypeName == "Report")
                {
                    Debug.Print("Report: " + item.Name);
                    ResetTheDataSource_for_a_Report(item.Path, "/DataSources/Shared_New"); //<=== This is the DataSource that I want them to use
                }
            }
        }

        private static void ResetTheDataSource_for_a_Report(string sPathAndFileNameOfTheReport, string sPathAndFileNameForDataSource)
        {
            //from: http://stackoverflow.com/questions/13144604/ssrs-reportingservice2010-change-embedded-datasource-to-shared-datasource
            ReportingService2010 rs = new ReportingService2010();
            rs.Credentials = System.Net.CredentialCache.DefaultCredentials;

            string reportPathAndName = sPathAndFileNameOfTheReport;
            //example of sPathAndFileNameOfTheReport "/0_Contacts/207_Practices_County_CareManager_Role_ContactInfo";
            List<ReportService2010.ItemReference> itemRefs = new List<ReportService2010.ItemReference>();
            ReportService2010.DataSource[] itemDataSources = rs.GetItemDataSources(reportPathAndName);
            foreach (ReportService2010.DataSource itemDataSource in itemDataSources)
            {
                ReportService2010.ItemReference itemRef = new ReportService2010.ItemReference();
                itemRef.Name = itemDataSource.Name;
                //example of DataSource i.e. 'itemRef.Reference': "/DataSources/SharedDataSource_DB2_CRM";
                itemRef.Reference = sPathAndFileNameForDataSource;
                itemRefs.Add(itemRef);
            }
            rs.SetItemReferences(reportPathAndName, itemRefs.ToArray());
        }
    }
}
To call it, I use this in the 'Main' method:
GetListOfObjectsInGivenFolder_and_ResetTheReportDataSource("0_Contacts");
In this case "0_Contacts" is the parent folder, itself located in the root directory, which contains all the reports whose data sources I want to reset to the new shared data source. That method then calls "ResetTheDataSource_for_a_Report", which actually sets the data source for each report.

Creating flow or model programmatically

I want to create a flow or model dynamically, without using mule-config.xml, for TCP communication with remote machines.
It should be something like this:
MuleContext context = new DefaultMuleContextFactory().createMuleContext();
MuleRegistry registry = context.getRegistry();

EndpointBuilder testEndpointBuilder = new EndpointURIEndpointBuilder("vm://testFlow.in", context);
testEndpointBuilder.setExchangePattern(MessageExchangePattern.REQUEST_RESPONSE);
registry.registerEndpointBuilder("testFlow.in", testEndpointBuilder);

InboundEndpoint vmInboundEndpoint = testEndpointBuilder.buildInboundEndpoint();
registry.registerEndpoint(vmInboundEndpoint);

StringAppendTransformer stringAppendTransformer = new StringAppendTransformer(" world");
stringAppendTransformer.setMuleContext(context);

Flow testFlow = new Flow("testFlow", context);
testFlow.setMessageSource(vmInboundEndpoint);
testFlow.setMessageProcessors(Arrays.asList((MessageProcessor) stringAppendTransformer));
registry.registerFlowConstruct(testFlow);

context.start();

MuleClient muleClient = new MuleClient(context);
MuleMessage response = muleClient.send("vm://testFlow.in", "hello", null);
Validate.isTrue(response.getPayloadAsString().equals("hello world"));

muleClient.dispose();
context.stop();
Not sure if I understand your problem, but if you need a TCP outbound endpoint in your flow, you create it much like the inbound VM endpoint in the example, and then add it at the desired point in the flow, in the list of processors passed to setMessageProcessors, just like stringAppendTransformer is wrapped in a list and added to the flow in the example (see the sketch after the next snippet).
The code to create your tcp outbound would be something like this:
String address = "tcp://localhost:1234";
EndpointURIEndpointBuilder builder = new EndpointURIEndpointBuilder(new URIBuilder(address), context);
builder.setExchangePattern(MessageExchangePattern.REQUEST_RESPONSE);
registry.registerEndpointBuilder("testFlow.out", builder);

OutboundEndpoint tcpOutboundEndpoint = builder.buildOutboundEndpoint();
registry.registerEndpoint(tcpOutboundEndpoint);
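To actually send messages through it, the endpoint is then placed in the flow's processor list (a sketch assuming Mule 3.x, where an OutboundEndpoint is itself a MessageProcessor):

// the outbound endpoint goes wherever in the chain the message should leave the flow
testFlow.setMessageProcessors(Arrays.asList(
        (MessageProcessor) stringAppendTransformer,
        (MessageProcessor) tcpOutboundEndpoint));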
UPDATE regarding your new comment:
Using a Java component:
//object factory for your Java class
PrototypeObjectFactory objectFactory = new PrototypeObjectFactory(MyClass.class);
objectFactory.initialise();

//the actual component
DefaultJavaComponent component = new DefaultJavaComponent(objectFactory);

//entry point resolver to determine the called method
ExplicitMethodEntryPointResolver resolver = new ExplicitMethodEntryPointResolver();
resolver.addMethod("myMethod");
component.setEntryPointResolvers(Arrays.asList((EntryPointResolver) resolver));
Then add the component to the flow's processor list, just like all the other processors.
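For example, reusing the names from the earlier snippets (again just a sketch):

// the component simply joins the processor chain alongside the other processors
testFlow.setMessageProcessors(Arrays.asList(
        (MessageProcessor) stringAppendTransformer,
        (MessageProcessor) component,
        (MessageProcessor) tcpOutboundEndpoint));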

RavenDB IsOperationAllowedOnDocument not supported in Embedded Mode

RavenDB throws InvalidOperationException when IsOperationAllowedOnDocument is called using embedded mode.
I can see in the IsOperationAllowedOnDocument implementation a clause checking for calls in embedded mode.
namespace Raven.Client.Authorization
{
    public static class AuthorizationClientExtensions
    {
        public static OperationAllowedResult[] IsOperationAllowedOnDocument(this ISyncAdvancedSessionOperation session, string userId, string operation, params string[] documentIds)
        {
            var serverClient = session.DatabaseCommands as ServerClient;
            if (serverClient == null)
                throw new InvalidOperationException("Cannot get whatever operation is allowed on document in embedded mode.");
Is there a workaround for this other than not using embedded mode?
Thanks for your time.
I encountered the same situation while writing some unit tests. The solution James provided worked; however, it resulted in one code path for the unit tests and another for the production code, which defeated the purpose of the unit tests. We were able to create a second document store and connect it to the first, which then let us access the authorization extension methods successfully. While this approach is probably not suitable for production code (creating document stores is expensive), it works nicely for unit tests. Here is a code sample:
using (var documentStore = new EmbeddableDocumentStore
{
    RunInMemory = true,
    UseEmbeddedHttpServer = true,
    Configuration = { Port = EmbeddedModePort }
})
{
    documentStore.Initialize();
    var url = documentStore.Configuration.ServerUrl;
    using (var docStoreHttp = new DocumentStore { Url = url })
    {
        docStoreHttp.Initialize();
        using (var session = docStoreHttp.OpenSession())
        {
            // now you can run code like:
            // session.GetAuthorizationFor(),
            // session.SetAuthorizationFor(),
            // session.Advanced.IsOperationAllowedOnDocument(),
            // etc...
        }
    }
}
There are a couple of other items worth mentioning:
The first document store needs to run with UseEmbeddedHttpServer set to true so that the second one can access it.
I created a constant for the port so it would be used consistently and to ensure a non-reserved port is used.
I encountered this as well. Looking at the source, there's no way to do that operation as written. I'm not sure if there's some intrinsic reason why, since I could easily replicate the functionality in my app by making an HTTP request directly for the same info:
HttpClient http = new HttpClient();
http.BaseAddress = new Uri("http://localhost:8080");

var url = new StringBuilder("/authorization/IsAllowed/")
    .Append(Uri.EscapeUriString(userid))
    .Append("?operation=")
    .Append(Uri.EscapeUriString(operation))
    .Append("&id=").Append(Uri.EscapeUriString(entityid));

http.GetStringAsync(url.ToString()).ContinueWith((response) =>
{
    var results = _session.Advanced.DocumentStore.Conventions.CreateSerializer()
        .Deserialize<OperationAllowedResult[]>(
            new RavenJTokenReader(RavenJToken.Parse(response.Result)));
}).Wait();