Is there a way to create an "Alias" or a "soft-link" of a subject in schema registry?
I'm asking because certain Kafka topics share a common schema, e.g.:
topic names:
{env}.{level}.{schema_name}.{producer_name}.{realtime|batch}
schema subject:
{env}.{level}.{schema_name}.{version}
When I set up the Confluent S3 sink connector, the connector's Avro converter follows TopicNameStrategy to derive the schema subject from the topic name, so the subject is derived as {topicName}-value.
Is there a way for me to create an alias {topicName}-value that actually points to {env}.{level}.{schema_name}.{version} in the Schema Registry? I'm trying to avoid duplicating schemas in the Schema Registry or making any significant changes to the current Kafka topics.
Thanks.
When I set up confluent s3 sink connector, the connector avro converter follows topicNameStrategy
You can set value.converter.value.subject.name.strategy to change that.
But that means you need to write your own class, since Confluent doesn't offer one matching your pattern. It sounds like your producers are already using a custom strategy to register those subjects in the Registry, rather than the default {topic}-value, anyway.
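If you do go that route, here is a minimal sketch of such a class. Hedged: the parsing rule below assumes your {env}.{level}.{schema_name}.{producer_name}.{realtime|batch} topic layout, and the SubjectNameStrategy interface has moved between Confluent versions, so check the one your client version ships.

import java.util.Map;
import io.confluent.kafka.schemaregistry.ParsedSchema;
import io.confluent.kafka.serializers.subject.strategy.SubjectNameStrategy;

public class SharedSchemaNameStrategy implements SubjectNameStrategy {

    // configure(Map) comes from the Configurable super-interface in recent versions
    public void configure(Map<String, ?> configs) {
    }

    @Override
    public String subjectName(String topic, boolean isKey, ParsedSchema schema) {
        // keep the first three dot-separated segments: {env}.{level}.{schema_name}
        String[] parts = topic.split("\\.");
        if (parts.length < 3) {
            // fall back to a TopicNameStrategy-style default
            return topic + (isKey ? "-key" : "-value");
        }
        return parts[0] + "." + parts[1] + "." + parts[2];
    }
}

You would then set value.converter.value.subject.name.strategy to that class on the connector; the jar has to be on the Connect worker's classpath.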
The only way to "alias" is to explicitly make the HTTP requests yourself: GET the schema from the "original" subject, then POST it as a new version of the new subject. You aren't duplicating anything more than metadata, since the schema itself remains the same. The main downside is that there is no "linking" between the "aliases", since such a concept doesn't really exist, and therefore you need to remember to change all subjects together.
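For example, a sketch of that flow with Java 11's HttpClient. The registry URL and subject names are illustrative, and the naive JSON handling assumes "schema" is the last field in the response (as it currently is); use a real JSON library in practice.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SubjectAlias {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String registry = "http://localhost:8081"; // illustrative

        // 1. GET the latest version of the "original" subject
        HttpRequest get = HttpRequest.newBuilder()
                .uri(URI.create(registry + "/subjects/dev.l1.my_schema.v1/versions/latest"))
                .build();
        String body = client.send(get, HttpResponse.BodyHandlers.ofString()).body();

        // 2. Naively pull out the "schema" field from the response JSON
        int i = body.indexOf("\"schema\":");
        String payload = "{\"schema\":" + body.substring(i + 9, body.lastIndexOf('}')) + "}";

        // 3. POST it as a new version under the "alias" subject
        HttpRequest post = HttpRequest.newBuilder()
                .uri(URI.create(registry + "/subjects/dev.l1.my_schema.producer1.realtime-value/versions"))
                .header("Content-Type", "application/vnd.schemaregistry.v1+json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();
        System.out.println(client.send(post, HttpResponse.BodyHandlers.ofString()).body());
    }
}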
Personally, I used the pattern of hard-coding env=DEV/level=FOO/ in the topics.dir setting, so the topic names ended up underneath that prefix. If you have dynamic values, put those fields in the record itself and use S3 Connect's FieldPartitioner to accomplish the same thing.
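For illustration, hypothetical connector settings (values are made up; you'd use either the hard-coded prefix or the field partitioner, not both):

connector.class=io.confluent.connect.s3.S3SinkConnector
# objects land under env=DEV/level=FOO/<topic>/... in the bucket
topics.dir=env=DEV/level=FOO
# or, when the values are dynamic, partition on a field carried in each record
partitioner.class=io.confluent.connect.storage.partitioner.FieldPartitioner
partition.field.name=env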
Speaking about the latest version (currently 2.3): the old way seems a little bit useless now.
If it is possible to create table(s) manually, another question follows: how do I map my model POJO's fields to column names so I can fill the cache using DataStreamers? (@QuerySqlField.name, isn't it?)
If you are going to populate the cache with data using a DataStreamer, then you should create the cache via the Java API, or by configuring it in the Ignite XML configuration.
Tables can be configured using the indexedTypes or queryEntities properties of CacheConfiguration. Take a look at the documentation: https://apacheignite.readme.io/docs/cache-queries#section-query-configuration-by-annotations
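A minimal sketch, assuming a hypothetical Person model (cache name, field names, and types are made up):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.annotations.QuerySqlField;
import org.apache.ignite.configuration.CacheConfiguration;

public class PersonCacheExample {

    static class Person {
        // maps this POJO field to the SQL column PERSON_NAME
        @QuerySqlField(name = "person_name", index = true)
        String name;

        @QuerySqlField(name = "person_age")
        int age;

        Person(String name, int age) {
            this.name = name;
            this.age = age;
        }
    }

    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Long, Person> cfg = new CacheConfiguration<>("persons");
            // registers Person for SQL: one "table" with the annotated columns
            cfg.setIndexedTypes(Long.class, Person.class);

            IgniteCache<Long, Person> cache = ignite.getOrCreateCache(cfg);

            // stream data in instead of doing individual puts
            try (IgniteDataStreamer<Long, Person> streamer = ignite.dataStreamer("persons")) {
                streamer.addData(1L, new Person("Alice", 30));
                streamer.addData(2L, new Person("Bob", 42));
            }
        }
    }
}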
In order to keep the blob storage separated per user in a multi-user application, I was trying to create separate containers per user within the same storage account. I know it can hit the 500 TB limit at some point, but that's for later.
I was thinking of using:
ClaimsPrincipal.Current.FindFirst(ClaimTypes.NameIdentifier).Value
as the name of the container, but it looks like it's not allowed and violates the container naming conventions. Since I am using it to differentiate user-specific data in SQL tables, I was thinking of using the same value here, but it doesn't seem to work.
Any recommendations? What's the cleanest way?
Are you saying you don't know the naming conventions? If you do know them, then, as Emily suggested, you should update your question to show the example names that you're trying to use. However, if you don't know the naming conventions, please refer to the naming rules section in How to use Blob Storage.
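If the claim value itself violates those rules (identity-provider identifiers often contain uppercase letters or disallowed characters), one hedged approach is to derive a deterministic, convention-safe name from it rather than using it directly. A minimal sketch (method and prefix names are made up):

import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class ContainerNames {

    // Derives a container name that always satisfies the documented rules:
    // 3-63 chars, lowercase letters/digits/hyphens, starting and ending
    // with a letter or digit.
    public static String forUser(String nameIdentifier) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(nameIdentifier.getBytes(StandardCharsets.UTF_8));
        // hex output contains lowercase letters and digits only
        String hex = String.format("%064x", new BigInteger(1, digest));
        // "user-" prefix keeps names recognizable; 37 chars total, under the limit
        return "user-" + hex.substring(0, 32);
    }
}

Because the hash is deterministic, the same user always maps to the same container, and characters that would break the naming rules never leak into the name.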
I have a CKAN datastore with a column named "recvTime" of type timestamp (i.e. using "timestamp" as the type at datastore_create time, as shown in this link). An example value for this column is "2014-06-12T16:08:39.542000".
I have a large number of records in the datastore (thousands) and I would like to delete the rows whose "recvTime" is before a given date. My first thought was to do it through the REST API with the datastore_delete operation using a range filter, but that is not possible, as described in the following Q&A.
Is there any other way of solving the issue, please?
Given that I have access to the host where the CKAN server is running, I wonder if this could be achieved by executing a regular SQL statement on the PostgreSQL engine where the datastore is persisted. However, I haven't found information about manipulating CKAN's underlying data model in the CKAN documentation, so I don't know whether this is a good idea or whether it is risky...
Any workaround or information pointer is highly welcome. Thanks!
You could definitely do this directly on the underlying database if you were willing to dig in there (the structure is pretty simple with tables named after the corresponding resource id). You could even turn this into an API of your own using an extension (though you'd want to be careful about permissions).
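For example, something along these lines run directly in psql. Hedged: "<resource-id>" is a placeholder for your resource's actual id, and you should back the database up first.

-- the datastore table is named after the resource id; back up first
DELETE FROM "<resource-id>"
WHERE "recvTime" < '2014-06-12T00:00:00';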
You might also be interested in the new support (master only atm) for extending the DataStore API via a plugin in an extension - see https://github.com/ckan/ckan/pull/1725
On my BizTalk server I use several different credentials to connect to internal and external systems. There is an upcoming task to change the passwords for a lot of systems and I'm searching for a solution to simplify this task on my BizTalk server.
Is there a way to adjust the File/FTP adapters to read the credentials from an XML file, so that I can change them in the XML file alone and everything gets updated, or is there an alternative I could use, such as PowerShell?
Has someone else had this task as well?
I'd rather not create a custom adapter, but if there is no alternative I will go for that. Using dynamic credentials for the send port can be solved with an orchestration, but I need this for the receive port as well.
You can export the bindings of all your applications. All the passwords for the FTP and File adapters will be masked out with a series of * (asterisks).
You could then edit your binding file down to just the ports you want to update, replace the masked-out passwords with the correct ones, and import it whenever you want the passwords changed.
Unfortunately, unless you have already prepared tokenised binding files, the above is a manual effort; a sketch of what "tokenised" means here follows below.
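As a purely illustrative example (the real element structure varies by adapter and BizTalk version, so treat this fragment as hypothetical): the masked password in the exported binding is replaced with a token that a script substitutes before the binding is imported.

<!-- hypothetical fragment; exported bindings carry adapter settings as
     escaped XML inside TransportTypeData -->
<TransportTypeData>&lt;CustomProps&gt;&lt;Password vt="8"&gt;__FTP_PASSWORD__&lt;/Password&gt;&lt;/CustomProps&gt;</TransportTypeData>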
I was going to recommend that you take a look at Enterprise Single Sign-On, but on second thoughts, I think you probably just need to 'bite the bullet' and make the change in the various Adapters.
ESSO would be beneficial if you had a single adapter with multiple endpoints/credentials, but I infer from your question that this isn't the case (i.e. you're not just using a single adapter). I also don't think re-writing the adapters to read usernames/passwords from a file is feasible IMHO - just changing the passwords would be much faster, by an order of weeks or months ;-)
One option that is available to you, however, depending on the direction in which the adapter is used: if you need to change credentials on send adapters, consider setting usernames/passwords at runtime via the various adapter property schemas (see http://msdn.microsoft.com/en-us/library/aa560564.aspx for the FTP adapter properties, for example). You could then create an encoding send pipeline component that reads an XML file containing credentials and updates the message context properties accordingly; the message would then be sent to the required endpoint with the appropriate credentials.
There is also the option of using ESSO as your (encrypted) config store instead of XML files, a database, etc. Richard Seroter has a really good post on this from way back in 2007 (it's still perfectly valid, though).
Because schema, object class definitions, etc. are DirContexts in JNDI, the API allows changing them at runtime (adding new attributes, removing them, etc.). Is this supported, or does this depend on repository implementation? In particular, is this supported by LDAP repositories? If it depends on implementation, I am interested in ApacheDS and OpenDJ.
The schema might be mutable: whether or not an LDAP client can change the schema depends on whether the directory administrators allow changes to subschema entries. In some servers, but not all, the location of the schema is listed in the root DSE.
Generally, the schema must be readable, since LDAP clients require access to matching rules, ordering rules, and attribute syntaxes to perform comparisons of attribute values (language-native comparisons should be avoided; matching rules should be preferred). Whether the schema is mutable, however, depends on whether the administrators allow clients to change it.
See also LDAP: The Root DSE for more information about the root DSE.
Some servers, like OpenDJ, Sun Directory Server..., allow you to dynamically modify the server's schema (provided you have the proper permissions), but it is highly recommended that you only extend the schema and do not make incompatible changes (such as removing objectClass definitions that are currently used by entries).
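To make the JNDI angle concrete, a minimal sketch of adding an attribute type at runtime. Hedged: the connection details, DN, OID, and names are illustrative, and whether createSubcontext succeeds depends entirely on the server's schema-modification policy.

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.directory.BasicAttributes;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class SchemaUpdateExample {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://localhost:389"); // illustrative
        env.put(Context.SECURITY_PRINCIPAL, "cn=admin");       // illustrative DN
        env.put(Context.SECURITY_CREDENTIALS, "secret");

        DirContext ctx = new InitialDirContext(env);

        // the schema is itself a DirContext
        DirContext schema = ctx.getSchema("");

        // define a new attribute type; this succeeds only if the server
        // permits schema modification for this account
        BasicAttributes attrs = new BasicAttributes(true); // ignore attribute-id case
        attrs.put("NUMERICOID", "1.3.6.1.4.1.99999.1.1");  // illustrative OID
        attrs.put("NAME", "myCustomAttr");
        attrs.put("DESC", "An example attribute added at runtime");
        attrs.put("SYNTAX", "1.3.6.1.4.1.1466.115.121.1.15"); // DirectoryString

        schema.createSubcontext("AttributeDefinition/myCustomAttr", attrs);

        ctx.close();
    }
}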