I have been playing with lodash in Plunker and found a strange issue: the array methods work, but the string and math methods do not.
These work:
_.first([1, 2, 3]);
_.isObject(v);
But these do not work:
_.min([4, 2, 8, 6]);
_.endsWith('abc', 'c');
Please help me figure it out. Here is the Plunker demo; it is forked from a lodash demo.
You're using an old version of lodash. Your _.min call is working fine (try just running var v = _.min([4, 2, 8, 6]);). _.endsWith isn't working because it doesn't exist in the version of lodash that you're using.
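A quick way to confirm this from the browser console (just a sketch; it assumes lodash is loaded globally as _):
// Check which build the demo actually loads and what it provides.
console.log(_.VERSION);          // whichever lodash version the plunker pins
console.log(typeof _.min);       // "function" – it exists and works
console.log(typeof _.endsWith);  // "undefined" on old builds; _.endsWith arrived in lodash 3.0.0
var v = _.min([4, 2, 8, 6]);     // 2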
Since the withConsecutive method will be removed in PHPUnit 10 (it is deprecated in 9.6), I need to replace all occurrences of it with new code.
I tried to find a solution but couldn't find a reasonable one.
For example, I have this code:
$this->personServiceMock->expects($this->exactly(2))
->method('prepare')
->withConsecutive(
[$personFirst, $employeeFirst],
[$personSecond, $employeeSecond],
)
->willReturnOnConsecutiveCalls($personDTO, $personSecondDTO);
What code should I replace withConsecutive with?
P.S. The documentation on the official site still shows how to use withConsecutive.
I've just upgraded to PHPUnit 10 and faced the same issue. Here's the solution I came to:
$this->personServiceMock
->method('prepare')
->willReturnCallback(fn($person, $employee) =>
match([$person, $employee]) {
[$personFirst, $employeeFirst] => $personDTO,
[$personSecond, $employeeSecond] => $personSecondDTO
}
);
If the mocked method is passed something other than what's expected in the match block, PHP will throw an UnhandledMatchError.
It looks like there is no out-of-the-box solution.
So, here are the solutions I found:
Write your own trait that implements a withConsecutive-like method (see the sketch below).
Use Prophecy or Mockery for mocking.
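For the trait option, here is one possible shape (a sketch only, not an existing package; it rebuilds the old behaviour on top of willReturnCallback and a call counter, and it must be used inside a TestCase so that assertSame is available):

trait ConsecutiveCalls
{
    /**
     * $invocationBuilder is whatever ->expects(...)->method(...) returns.
     *
     * @param array $expectedArgsPerCall e.g. [[$personFirst, $employeeFirst], [$personSecond, $employeeSecond]]
     * @param array $returnValues        e.g. [$personDTO, $personSecondDTO]
     */
    private function expectConsecutiveCalls($invocationBuilder, array $expectedArgsPerCall, array $returnValues): void
    {
        $call = 0;
        $invocationBuilder->willReturnCallback(
            function (...$args) use (&$call, $expectedArgsPerCall, $returnValues) {
                // Check this call's arguments against the expected set for this position,
                // then return the matching value and advance the counter.
                $this->assertSame($expectedArgsPerCall[$call], $args);

                return $returnValues[$call++];
            }
        );
    }
}

Usage in a test that uses the trait would look something like:

$this->expectConsecutiveCalls(
    $this->personServiceMock->expects($this->exactly(2))->method('prepare'),
    [[$personFirst, $employeeFirst], [$personSecond, $employeeSecond]],
    [$personDTO, $personSecondDTO]
);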
I'm facing a strange problem, maybe related to some cache that I cannot find.
I have the following Models:
class Incubadores(models.Model):
    incubador = models.CharField(max_length=10, primary_key=True)
    posicion = models.CharField(max_length=10)

class Tareas(TimeStampedModel):
    priority = models.CharField(max_length=20, choices=PRIORITIES, default='normal')
    incubador = models.ForeignKey(Incubadores, on_delete=models.CASCADE, null=True, db_column='incubador')
    info = JSONField(null=True)
    datos = JSONField(null=True)

    class Meta:
        ordering = ('priority', 'modified', 'created')
I previously didn't have the db_column argument, so the Postgres column for that field was incubador_id.
I used db_column to change the column name, then ran python manage.py makemigrations and python manage.py migrate, but I'm still getting the column as incubador_id whenever I perform a query such as:
>>> tareas = Tareas.objects.all().values()
>>> print(tareas)
<QuerySet [{'info': None, 'modified': datetime.datetime(2019, 11, 1, 15, 24, 58, 743803, tzinfo=<UTC>), 'created': datetime.datetime(2019, 11, 1, 15, 24, 58, 743803, tzinfo=<UTC>), 'datos': None, 'priority': 'normal', 'incubador_id': 'I1.1', 'id': 24}, {'info': None, 'modified': datetime.datetime(2019, 11, 1, 15, 25, 25, 49950, tzinfo=<UTC>), 'created': datetime.datetime(2019, 11, 1, 15, 25, 25, 49950, tzinfo=<UTC>), 'datos': None, 'priority': 'normal', 'incubador_id': 'I1.1', 'id': 25}]>
I need to modify this column name because I'm having other issues with Serializers, so the change is necessary.
If I perform the same query on other models where I've also changed the default column name, the problem is exactly the same.
It happens both in the shell and in the code.
I've tried different queries to make sure it's not related to Django's lazy query system, but the problem is the same. I've also tried executing django.db.connection.close().
If I run a direct SQL query against PostgreSQL, there is no incubador_id column, only incubador, which is correct.
Does anyone have any idea what could be happening? I've already spent two days on this problem and cannot find a reason :( It's a very basic operation.
Thanks!
This answer will explain why this is happening.
Django's built-in serializers don't have this issue, but probably won't yield exactly what you're looking for:
>>> from django.core import serializers
>>> serializers.serialize("json", Tareas.objects.all())
'[{"model": "inc.tareas", "pk": 1, "fields": {"priority": "normal", "incubador": "test-i"}}]'
You could use the fields attribute here, which seems like it would give you what you're looking for.
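For example (a sketch; limit the list to whatever fields you actually need):

from django.core import serializers

# Only serialize the listed fields; the FK appears under its field name ("incubador"),
# not under the _id attribute name.
data = serializers.serialize("json", Tareas.objects.all(), fields=["priority", "incubador"])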
You don't specify what your "other issues with Serializers" are, but my suggestion would be to write custom serialization code. Relying on something like .values() or even serializers.serialize() is a bit too implicit for me; writing explicit serialization code makes it less likely you'll accidentally break a contract with a consumer of your serialized data if this model changes.
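A sketch of what that explicit serialization could look like for this model (the function name and output shape are just illustrative):

def serialize_tarea(tarea):
    # Explicit contract: name and shape the fields however your consumers expect.
    return {
        "id": tarea.id,
        "priority": tarea.priority,
        # The model attribute is still incubador_id even with db_column set;
        # db_column only changes the name at the database level.
        "incubador": tarea.incubador_id,
        "info": tarea.info,
        "datos": tarea.datos,
    }

payload = [serialize_tarea(t) for t in Tareas.objects.all()]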
Note: Please try to make the example you provide minimal and reproducible. I removed some fields to make this work with stock Django, which is why the serialized value is missing fields; the _id issue was still present without the third-party apps you're using, and was resolved with serializers. This also isn't specific to PG; it happens in sqlite as well.
My goal in writing a function is to let callers pass in the same condition arguments they would pass to a where call in ActiveRecord, and to get back the corresponding Rails-generated SQL.
Example
If my function receives a hash like this as an argument
role: 'Admin', id: [4, 8, 15]
I would expect it to generate this string:
"users"."role" = 'Admin' AND "users"."id" IN (4, 8, 15)
Possible Solutions
I get closest with to_sql.
pry(main)> User.where(role: 'Admin', id: [4, 8, 15]).to_sql
=> "SELECT \"users\".* FROM \"users\" WHERE \"users\".\"role\" = 'Admin' AND \"users\".\"id\" IN (4, 8, 15)"
It returns almost exactly what I want; however, I would be more comfortable not stripping away the SELECT ... WHERE myself in case something changes in the way the SQL is generated. I realize the WHERE should always be there to split on, but I'd prefer an even less brittle approach.
My next approach was using Arel's where_sql function.
pry(main)> User.where(role: 'Admin', id: [4, 8, 15]).arel.where_sql
=> "WHERE \"users\".\"role\" = $1 AND \"users\".\"id\" IN (4, 8, 15)"
It gets rid of the SELECT but leaves the WHERE. I would prefer it to the above if it had already injected the sanitized role, but that renders it quite a bit less desirable.
I've also considered generating the SQL myself, but I would prefer to avoid that.
Do any of you know if there's some method right under my nose I just haven't found yet? Or is there a better way of doing this altogether?
Ruby 2.3.7
Rails 5.1.4
I too would like to know how to get the conditions without the leading WHERE. I see in https://coderwall.com/p/lsdnsw/chain-rails-scopes-with-or that they used string manipulation to get rid of the WHERE, which seems messy but may be the only solution currently. :/
scope.arel.where_sql.gsub(/\AWHERE /i, "")
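If you do go the string-manipulation route, here is a small sketch of how it could be wrapped (illustrative only, not a Rails API; it is just as brittle as splitting on WHERE by hand):

# Illustrative helper: builds a throwaway relation, then strips everything up to
# and including the WHERE keyword from to_sql, so sanitized values are already injected.
def conditions_sql(conditions)
  User.where(conditions).to_sql.split(/\bWHERE\b/i, 2).last.strip
end

conditions_sql(role: 'Admin', id: [4, 8, 15])
# => "\"users\".\"role\" = 'Admin' AND \"users\".\"id\" IN (4, 8, 15)"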
I am working on my own directory of my cryptocurrency purchases.
I am getting the prices of BTC, ETH, and LTC via an API, and I created a component for each of the coins I've purchased; now I want to calculate the current value (ownedCoins * currentPrice).
So in my $root I have { eth: 324.233, btc: 2211.43, ltc: 41.341 }
Here is where I want to calculate it:
self.eur = response.data.sum[0].quantity * this.$root.ltc;
But I want to make this dynamic, so I want to use a dynamic property name. Something like this: self.eur = response.data.sum[0].quantity * this.$root.{this.coinName};
How would I do that?
I would read the State Management part of the Vue.js docs and then check out the Vuex docs. Once your data store gets even mildly more complex, managing it the way your sample code does will become overwhelming.
Your question doesn't have anything to do with Vue, just plain JavaScript. To access object properties in JavaScript you have two options: dot notation or bracket notation (I call it array notation):
const car = { wheels: 4, seats: 5, horsepower: 145 };
console.log(car.wheels);
console.log(car['wheels']); //same result
So
this.$root[this.coinName];
will give you the result you are looking for.
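Applied to your example (assuming this.coinName holds 'btc', 'eth', or 'ltc'):
// Look up the current price for whichever coin this component represents.
self.eur = response.data.sum[0].quantity * this.$root[this.coinName];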
I'm analyzing data with Apache Pig and could not find a way to expand an array of items.
Here is the schema I'm working with, and an example of the desired output:
(col1:int, col2:int, items:{ARRAY_ELEM:(name:chararray, total:int)})
input = (1, 1, {("bird", 5), ("bear", 12), ("wolf", 10)})
output = (1, 1, "bird", 5, "bear", 12, "wolf", 10)
Is there any way to do this transformation?
Thanks for your help!
If you need to do this transformation right now, the easiest way is probably to write a UDF in Python or Java (I am not aware of any built-in solution).
However, most of the time it is better to keep the same number of columns in each record (e.g. keep your array as a bag or tuple and don't "flatten" it into one record).
Check out this Python UDF I wrote for doing that (hopefully soon to be part of the Python PiggyBank). You can use it on your bags and then flatten them to get the results you want. For example, assuming your data set is called blah, you should be able to register my function and then do something like:
flattened_blah = FOREACH blah GENERATE item1, item2, FLATTEN(bagToTuple(item3)) AS item4, item5, item6, item7, item8, item9
Also, I'm pretty sure LinkedIn's DataFu has a way of doing this. If you're using Pig and not yet using that, you probably should take a look at it.