Lookback API: How long is a defect in a particular state?

We have a state in our defects called "Need More Information". I would like to create a graph of how many defects are in that state at any given point in time.
I think I can get the info to do that with the Lookback API with the following query:
my $find = {
    State => 'Need More Information',
    '_PreviousValues.State' => { '$ne' => 'Need More Information' },
    _TypeHierarchy => -51006, # defect
    _ValidFrom => {
        '$gte' => '2012-09-01TZ',
        '$lt' => '2012-10-23TZ',
    },
};
I thought that would give me back a list of all defect snapshots where the defect was transitioning into the "Need More Information" state, but it does not (it seems to list everything that was ever in the "Need More Information" state).
Technically what I need is a query that lists snapshots of any defects transitioning either TO or FROM the "Need More Information" state, but since this simpler query did not work as I expected, I thought I would first ask why the query above did not behave the way I expected.
The "Generated Query" in the header that comes back is:
{
    'fields' => 1,
    'skip' => 0,
    'limit' => 100,
    'find' => {
        '_TypeHierarchy' => -51006,
        '_ValidFrom' => {
            '$gte' => '2012-09-01T00:00:00.000Z',
            '$lt' => '2012-10-23T00:00:00.000Z'
        },
        '_PreviousValues.State' => {
            '$in' => [
                undef,
                5792599066,
                5792599067,
                5792599065,
                5792599070,
                5792599071,
                5792599068,
                5792599073,
                5792599072,
                5792599075,
                5792599077,
                5792599076,
                5792599078,
                3631859989,
                3631859988,
                3631859987,
                3631859986
            ]
        },
        'State' => {
            '$in' => [
                4384150044
            ]
        }
    }
};

I tried leveraging the $nin clause and had success with it. You might try adjusting your query to resemble something like this:
find: {
    _Type: 'Defect',
    State: 'Need More Information',
    '_PreviousValues.State': {
        $in: [
            'Submitted', 'Open', 'Fixed', 'Closed'
        ]
    },
    etc...
}
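
As an aside, note how the server expanded the '$ne' clause in the generated query above: '_PreviousValues.State' became an '$in' list of every other state OID plus undef, and undef also matches snapshots with no previous State value at all, which may be part of why more snapshots match than expected (an inference from the generated query, not a confirmed explanation). For the TO-or-FROM case mentioned in the question, a minimal sketch in the same Perl style, assuming the Lookback API accepts a Mongo-style '$or' clause (an assumption worth verifying):
# snapshots entering OR leaving "Need More Information"
my $find = {
    _TypeHierarchy => -51006, # defect
    '$or' => [
        # transitions INTO the state
        {
            State => 'Need More Information',
            '_PreviousValues.State' => { '$ne' => 'Need More Information' },
        },
        # transitions OUT OF the state
        {
            State => { '$ne' => 'Need More Information' },
            '_PreviousValues.State' => 'Need More Information',
        },
    ],
    _ValidFrom => {
        '$gte' => '2012-09-01TZ',
        '$lt' => '2012-10-23TZ',
    },
};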


Oro PriceList business unit ownership

I have a requirement to configure ownership for PriceList entities. To approach this, I created a migration to add the required fields:
$this->extendExtension->addManyToOneRelation(
    $schema,
    $table,
    'organization',
    $organizationTable,
    'name',
    [
        'extend' => [
            'is_extend' => true,
            'owner' => ExtendScope::OWNER_SYSTEM,
            'without_default' => true,
        ]
    ]
);
$this->extendExtension->addManyToOneRelation(
    $schema,
    $table,
    'owner',
    $businessTable,
    'name',
    [
        'extend' => [
            'is_extend' => true,
            'owner' => ExtendScope::OWNER_SYSTEM,
            'without_default' => true,
        ]
    ]
);
Then I updated the entity configuration with:
$params = [
    "owner_type" => "BUSINESS_UNIT",
    "owner_field_name" => "owner",
    "owner_column_name" => "owner_id",
    "organization_field_name" => "organization",
    "organization_column_name" => "organization_id"
];
foreach ($params as $code => $value) {
    $queries->addPostQuery(
        new UpdateEntityConfigEntityValueQuery(
            PriceList::class,
            'ownership',
            $code,
            $value
        )
    );
}
The migration processed without issues, but an error occurred for the data grid on the PriceList index page:
An exception occurred while executing 'SELECT count(o0_.id) AS sclr_0 FROM oro_price_list o0_ WHERE o0_. = 1'
It looks like the data grid couldn't resolve the organization column to build the pagination query (note the empty column name after o0_. in the WHERE clause). The data grid is the unmodified grid from Oro 4.1 EE.
Everything works fine if the ownership configuration is updated via SetOwnershipTypeQuery instead of UpdateEntityConfigEntityValueQuery:
$queries->addQuery(
    new SetOwnershipTypeQuery(
        PriceList::class,
        [
            'owner_type' => 'BUSINESS_UNIT',
            'owner_field_name' => 'owner',
            'owner_column_name' => 'owner_id',
            'organization_field_name' => 'organization',
            'organization_column_name' => 'organization_id'
        ]
    )
);

CakePHP 3.8.13: add admad/cakephp-jwt-auth

This question has been asked many times on Stack Overflow, but I have tried every accepted solution.
I'm new to CakePHP and was assigned to add JWT to our application. Previously the team used the default CakePHP sessions. To integrate JWT, I used admad/cakephp-jwt-auth. So in the AppController:
public function initialize()
{
    parent::initialize();
    $this->loadComponent('RequestHandler');
    $this->loadComponent('Flash');
    $this->loadComponent('Recurring');
    $this->loadComponent('Auth', [
        'storage' => 'Memory',
        'authenticate' => [
            'Form' => [
                'fields' => [
                    'username' => 'user_name',
                    'password' => 'password',
                ],
                'contain' => ['Roles']
            ],
            'ADmad/JwtAuth.Jwt' => [
                'parameter' => 'token',
                'userModel' => 'CbEmployees',
                'fields' => [
                    'username' => 'id'
                ],
                'queryDatasource' => true
            ]
        ],
        'unauthorizedRedirect' => false,
        'checkAuthIn' => 'Controller.initialize'
    ]);
}
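For context, a brief sketch of how a client would present the token on later requests, assuming the ADmad plugin's documented defaults ('parameter' => 'token' enables the query-string form; the Authorization header form is checked by default; the endpoint path here is made up):
GET /api/some-endpoint.json?token=<jwt>
Authorization: Bearer <jwt>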
I have to use CbEmployees, which is our user model.
Then in my custom controller, I added my login function:
public function login()
{
    $user = $this->Auth->identify();
    if (!$user) {
        $data = "Invalid login details";
    } else {
        $tokenId = base64_encode(32);
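        // note: base64_encode(32) just encodes the literal string "32";
        // $tokenId (and $issuedAt below) are never used when building the token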
        $issuedAt = time();
        $key = Security::salt();
        $data = JWT::encode(
            [
                'alg' => 'HS256',
                'id' => $user['id'],
                'sub' => $user['id'],
                'iat' => time(),
                'exp' => time() + 86400,
            ],
            $key
        );
    }
    $this->ApiResponse([
        "data" => $data
    ]);
}
Then I call this function from Postman with the body:
{
    "username": "developer",
    "password": "dev2020"
}
I always get the response Invalid login details. One suggested solution is to check the password column's data type and length; the password column is varchar(255). Another is to check the password setter in the entity. In the entity I have:
protected function _setPassword($password)
{
    if (strlen($password) > 0) {
        return Security::hash($password, 'sha1', true);
        // return (new DefaultPasswordHasher)->hash($password);
    }
}
I specifically asked why the team is using Security::hash($password, 'sha1', true): due to the migration from CakePHP 2 to CakePHP 3, they had to keep the same hashing scheme.
Why am I always getting Invalid login details? What am I doing wrong here? I can log in with the same credentials when using the application.
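
Two details in the snippets above are worth double-checking (observations, not a verified fix). First, the Form authenticator is configured with 'username' => 'user_name', and CakePHP's FormAuthenticate reads the request data under exactly that key, so a Postman body using "username" never reaches the authenticator; the body would need a "user_name" key instead. Second, FormAuthenticate verifies passwords with DefaultPasswordHasher (bcrypt) by default, while the entity stores Security::hash($password, 'sha1', true). For legacy sha1 hashes, the Form config would need a matching hasher; a minimal sketch, assuming CakePHP 3's built-in WeakPasswordHasher:
'Form' => [
    'fields' => [
        'username' => 'user_name',
        'password' => 'password',
    ],
    'contain' => ['Roles'],
    // assumption: legacy sha1 hashes from CakePHP 2 need the weak hasher
    'passwordHasher' => [
        'className' => 'Weak',
        'hashType' => 'sha1',
    ]
],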

Logstash sprintf formatting for elasticsearch output plugin not working

I am having trouble using sprintf to reference the event fields in the elasticsearch output plugin and I'm not sure why. Below is the event received from Filebeat and sent to Elasticsearch after filtering is complete:
{
    "beat" => {
        "hostname" => "ca86fed16953",
        "name" => "ca86fed16953",
        "version" => "6.5.1"
    },
    "@timestamp" => 2018-12-02T05:13:21.879Z,
    "host" => {
        "name" => "ca86fed16953"
    },
    "tags" => [
        [0] "beats_input_codec_plain_applied",
        [1] "_grokparsefailure"
    ],
    "fields" => {
        "env" => "DEV"
    },
    "source" => "/usr/share/filebeat/dockerlogs/logstash_DEV.log",
    "@version" => "1",
    "prospector" => {
        "type" => "log"
    },
    "bgp_id" => "42313900",
    "message" => "{<some message here>}",
    "offset" => 1440990627,
    "input" => {
        "type" => "log"
    },
    "docker" => {
        "container" => {
            "id" => "logstash_DEV.log"
        }
    }
}
I am trying to index these logs based on Filebeat's environment (the [fields][env] value above). Here is my config file:
input {
    http { }
    beats {
        port => 5044
    }
}
filter {
    grok {
        patterns_dir => ["/usr/share/logstash/pipeline/patterns"]
        break_on_match => false
        match => { "message" => ["%{RUBY_LOGGER}"] }
    }
}
output {
    elasticsearch {
        hosts => ["elasticsearch:9200"]
        index => "%{[fields][env]}-%{+yyyy.MM.dd}"
    }
    stdout { codec => rubydebug }
}
I would think the referenced event fields would have already been populated by the time the event reaches the elasticsearch output plugin. However, on the Kibana end it does not register the formatted index name; the index shows up with the sprintf reference unresolved.
What have I done wrong?
From the Elasticsearch output plugin docs:
https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html#plugins-outputs-elasticsearch-manage_template
Should you require support for other index names, or would like to change the mappings in the template in general, a custom template can be specified by setting template to the path of a template file. Setting manage_template to false disables this feature. If you require more control over template creation, (e.g. creating indices dynamically based on field names) you should set manage_template to false and use the REST API to apply your templates manually.
By default, the elasticsearch output requires you to specify a custom template if you use index names other than logstash-%{+YYYY.MM.dd}. To disable this template management instead, we need to include the manage_template => false key.
So with this new set of info, the working config should be:
output {
    elasticsearch {
        hosts => ["elasticsearch:9200"]
        index => "%{[fields][env]}-%{+yyyy.MM.dd}"
        manage_template => false
    }
    stdout { codec => rubydebug }
}
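If the index name still arrives with the %{[fields][env]} reference unresolved, it usually means some events reach the output without a [fields][env] value; in the config above, for instance, events arriving via the http input would not carry Filebeat's fields. A conditional fallback is a common guard; a sketch, with the unknown-... index name made up for illustration:
output {
    if [fields][env] {
        elasticsearch {
            hosts => ["elasticsearch:9200"]
            index => "%{[fields][env]}-%{+yyyy.MM.dd}"
            manage_template => false
        }
    } else {
        # hypothetical catch-all index for events missing [fields][env]
        elasticsearch {
            hosts => ["elasticsearch:9200"]
            index => "unknown-%{+yyyy.MM.dd}"
            manage_template => false
        }
    }
    stdout { codec => rubydebug }
}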

CakePHP 3 Form Authentication fails on identify() and manual DefaultPasswordHasher check()

I copied code from examples to create a basic login screen based on the individuals table with email and password. My AppController has this:
$this->loadComponent('Auth', [
    'authenticate' => [
        'Form' => [
            'fields' => ['username' => 'email', 'password' => 'password'],
            'userModel' => 'Individuals'
        ]
    ],
    'loginAction' => [
        'controller' => 'Individuals',
        'action' => 'login'
    ],
    'loginRedirect' => [
        'controller' => 'Associations',
        'action' => 'login'
    ],
    'logoutRedirect' => [
        'controller' => 'Association',
        'action' => 'login',
        'home'
    ]
]);
Password resets are done via a token emailed to the user. The controller saves the unencrypted value, and /src/Model/Entity/Individual.php has _setPassword, which ensures the database stores an encrypted value. Every save of the same password produces a different hash, but that, I gather, is normal.
protected function _setPassword($password) {
    if (strlen($password) > 0) {
        return (new DefaultPasswordHasher)->hash($password);
    }
}
My login function started with the standard stuff, but it always returns false:
$user = $this->Auth->identify();
My login function now has this debug code, which always reports "no match":
debug($this->request->data);
$email = $this->request->data['email'];
$pwd = $this->request->data['password'];
$user = $this->Individuals->find()
    ->select(['Individuals.id', 'Individuals.email', 'Individuals.password'])
    ->where(['Individuals.email' => $email])
    ->first();
if ($user) {
    if ((new DefaultPasswordHasher)->check($pwd, $user->password)) {
        debug('match');
    }
    else {
        debug('no match');
    }
    if ($user->password == (new DefaultPasswordHasher)->hash($pwd)) {
        debug('match2');
    }
    else {
        debug('no match2');
    }
}
There's a lot more code in and around that, and I'm pretty confident I've got it right. Let me know if you need more; I'm keen to crack this.
Thanks in advance.
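
One observation on the debug code above: DefaultPasswordHasher is bcrypt-based and generates a fresh salt on every call, so the second test, which compares $user->password against a newly computed hash(), will always print "no match2" by design; only check() is a meaningful comparison. A minimal standalone sanity check (hypothetical test code, not from the question):
// standalone round-trip test for DefaultPasswordHasher
use Cake\Auth\DefaultPasswordHasher;

$hasher = new DefaultPasswordHasher();
$hash = $hasher->hash('secret');          // e.g. "$2y$10$..." (bcrypt prefix)
debug($hasher->check('secret', $hash));   // true when hashing and checking agree
If check() fails against the stored value but passes in this round-trip, the stored hash was most likely not produced by DefaultPasswordHasher on the write path (or was truncated by the column).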

Only strings in InfluxDB

I have this config file in Logstash:
input {
    redis {
        host => "localhost"
        data_type => "list"
        key => "vortex"
        threads => 4
        type => "testrecord"
        codec => "plain"
    }
}
filter {
    kv {
        add_field => {
            "test1" => "yellow"
            "test" => "ife"
            "feild" => "pink"
        }
    }
}
output {
    stdout { codec => rubydebug }
    influxdb {
        db => "toast"
        host => "localhost"
        measurement => "myseries"
        allow_time_override => true
        use_event_fields_for_data_points => true
        exclude_fields => ["@version", "@timestamp", "sequence", "message", "type", "host"]
        send_as_tags => ["bar", "feild", "test1", "test"]
    }
}
and a list in Redis with the following data:
foo=10207 bar=1 sensor2=1 sensor3=33.3 time=1489686662
Everything works fine, but every field in InfluxDB is defined as a string, regardless of its value.
Does anybody know how to get around this issue?
The mutate filter may be what you're looking for here.
filter {
    mutate {
        convert => {
            "value" => "integer"
            "average" => "float"
        }
    }
}
This means you need to know what your fields are beforehand, but it will convert them into the right data types.
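
Adapted to the actual fields in the Redis record above and placed after the existing kv filter, a sketch (the target types are guesses from the sample values):
filter {
    mutate {
        convert => {
            "foo" => "integer"
            "bar" => "integer"
            "sensor2" => "integer"
            "sensor3" => "float"
        }
    }
}
With that in place, InfluxDB should receive numeric field values instead of strings; the fields listed in send_as_tags remain strings either way, since tags are always strings in InfluxDB.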