I tried to run org.apache.ignite.examples.datastructures.IgniteSetExample on a cluster (2 nodes) after adding some debug code of my own. Part of its source code looks like the following:
CollectionConfiguration setCfg = new CollectionConfiguration();
setCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
setCfg.setCacheMode(CacheMode.PARTITIONED);
// Initialize new set.
IgniteSet<String> set = ignite.set(setName, setCfg);
System.out.println("Set size before initializing: " + set.size()); //added by myslef
// Initialize set items.
for (int i = 0; i < 10; i++) {
    set.add(Integer.toString(i));
    System.out.println("Set: " + Arrays.toString(set.toArray())); // added by myself
}
System.out.println("Set size after initializing: " + set.size());
In my opinion, the size of the Ignite set should be 10 after adding the data, but I got a number greater than 10, typically 15. I found that some duplicate entries had been added to the set. The log is here:
[19:53:16] Topology snapshot [ver=29, servers=2, clients=0, CPUs=8, heap=3.4GB]
Sep 21, 2017 7:53:16 PM org.apache.ignite.logger.java.JavaLogger info
Info: Topology snapshot [ver=29, servers=2, clients=0, CPUs=8, heap=3.4GB]
>>> Ignite set example started.
Set size before initializing: 0
Set: [0]
Set: [1, 1, 0]
Set: [2, 1, 2, 1, 0]
Set: [2, 1, 3, 2, 1, 0, 3]
Set: [2, 1, 3, 2, 1, 0, 4, 3]
Set: [2, 1, 3, 2, 1, 0, 5, 4, 3]
Set: [2, 1, 3, 2, 1, 0, 6, 5, 4, 3]
Set: [7, 2, 1, 3, 7, 2, 1, 0, 6, 5, 4, 3]
Set: [7, 2, 1, 3, 8, 7, 2, 1, 0, 6, 5, 4, 3]
Set: [7, 2, 1, 3, 9, 8, 7, 2, 1, 0, 6, 5, 4, 3]
Set size after initializing: 14
Sep 21, 2017 7:53:16 PM org.apache.ignite.logger.java.JavaLogger info
Info: Class locally deployed: class org.apache.ignite.examples.datastructures.IgniteSetExample$SetClosure
Sep 21, 2017 7:53:16 PM org.apache.ignite.logger.java.JavaLogger info
Info: Class locally deployed: class org.apache.ignite.configuration.CollectionConfiguration
Sep 21, 2017 7:53:16 PM org.apache.ignite.logger.java.JavaLogger info
Info: Class locally deployed: class org.apache.ignite.cache.CacheAtomicityMode
Sep 21, 2017 7:53:16 PM org.apache.ignite.logger.java.JavaLogger info
Info: Class locally deployed: class org.apache.ignite.cache.CacheMode
Set item has been added: 7aa983e1-c358-4876-b58f-4f3b7bfa65f3_0
Set item has been added: 7aa983e1-c358-4876-b58f-4f3b7bfa65f3_1
Set item has been added: 7aa983e1-c358-4876-b58f-4f3b7bfa65f3_2
Set item has been added: 7aa983e1-c358-4876-b58f-4f3b7bfa65f3_3
Set item has been added: 7aa983e1-c358-4876-b58f-4f3b7bfa65f3_4
Set size after writing [expected=20, actual=30]
Iterate over set.
Set item: 292c99a6-137b-433c-97d9-40ce0f8c0abc_1
Set item: 7aa983e1-c358-4876-b58f-4f3b7bfa65f3_3
Set item: 292c99a6-137b-433c-97d9-40ce0f8c0abc_3
Set item: 7
Set item: 292c99a6-137b-433c-97d9-40ce0f8c0abc_4
Set item: 2
Set item: 1
Set item: 7aa983e1-c358-4876-b58f-4f3b7bfa65f3_1
Set item: 3
Set item: 7aa983e1-c358-4876-b58f-4f3b7bfa65f3_2
Set item: 7aa983e1-c358-4876-b58f-4f3b7bfa65f3_3
Set item: 7aa983e1-c358-4876-b58f-4f3b7bfa65f3_4
Set item: 2
Set item: 1
Set item: 0
Set item: 6
Set item: 5
Set item: 7aa983e1-c358-4876-b58f-4f3b7bfa65f3_0
Set item: 4
Set item: 7aa983e1-c358-4876-b58f-4f3b7bfa65f3_1
Set item: 3
Set item: 7aa983e1-c358-4876-b58f-4f3b7bfa65f3_2
Set item: 292c99a6-137b-433c-97d9-40ce0f8c0abc_1
Set item: 9
Set item: 292c99a6-137b-433c-97d9-40ce0f8c0abc_2
Set item: 8
Set item: 292c99a6-137b-433c-97d9-40ce0f8c0abc_3
Set item: 7
Set item: 292c99a6-137b-433c-97d9-40ce0f8c0abc_4
Set item: 292c99a6-137b-433c-97d9-40ce0f8c0abc_0
Set size before clearing: 30
Set size after clearing: 0
Set was removed: true
Expected exception - Set has been removed from cache: GridCacheSetImpl [cache=GridDhtAtomicCache [defRes=org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$1#482d776b, near=null, super=GridDhtCacheAdapter [multiTxHolder=java.lang.ThreadLocal#186978a6, stopping=false, super=GridDistributedCacheAdapter [super=GridCacheAdapter [locMxBean=org.apache.ignite.internal.processors.cache.CacheLocalMetricsMXBeanImpl#631e06ab, clusterMxBean=org.apache.ignite.internal.processors.cache.CacheClusterMetricsMXBeanImpl#2a3591c5, aff=org.apache.ignite.internal.processors.cache.affinity.GridCacheAffinityImpl#34a75079, igfsDataCache=false, mongoDataCache=false, mongoMetaCache=false, igfsDataCacheSize=null, asyncOpsSem=java.util.concurrent.Semaphore#346a361[Permits = 500], name=datastructures_ATOMIC_PARTITIONED_0#default-ds-group, size=0]]]], name=03bbdb45-72ce-45aa-b75f-00b7b6134dc6, id=d55a844ae51-baeb6ba4-cb04-4d72-b0d8-188f21bc5ac5, collocated=false, hdrPart=961, rmvd=true, binaryMarsh=true, compute=org.apache.ignite.internal.IgniteComputeImpl#4052274f]
Sep 21, 2017 7:53:17 PM org.apache.ignite.logger.java.JavaLogger info
Info: Command protocol successfully stopped: TCP binary
Sep 21, 2017 7:53:17 PM org.apache.ignite.logger.java.JavaLogger info
Info: Stopped cache [cacheName=ignite-sys-cache]
Sep 21, 2017 7:53:17 PM org.apache.ignite.logger.java.JavaLogger info
Info: Stopped cache [cacheName=datastructures_TRANSACTIONAL_PARTITIONED_0#default-ds-group, group=default-ds-group]
Sep 21, 2017 7:53:17 PM org.apache.ignite.logger.java.JavaLogger info
Info: Stopped cache [cacheName=datastructures_ATOMIC_PARTITIONED_0#default-ds-group, group=default-ds-group]
Sep 21, 2017 7:53:17 PM org.apache.ignite.logger.java.JavaLogger info
Info: Stopped cache [cacheName=ignite-sys-atomic-cache#default-ds-group, group=default-ds-group]
Sep 21, 2017 7:53:17 PM org.apache.ignite.logger.java.JavaLogger info
Info: Removed undeployed class: GridDeployment [ts=1505994796165, depMode=SHARED, clsLdr=sun.misc.Launcher$AppClassLoader#73d16e93, clsLdrId=355a844ae51-7aa983e1-c358-4876-b58f-4f3b7bfa65f3, userVer=0, loc=true, sampleClsName=org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionFullMap, pendingUndeploy=false, undeployed=true, usage=0]
[19:53:17] Ignite node stopped OK [uptime=00:00:00:778]
Sep 21, 2017 7:53:17 PM org.apache.ignite.logger.java.JavaLogger info
Info:
>>> +---------------------------------------------------------------------------------+
>>> Ignite ver. 2.1.0#20170721-sha1:a6ca5c8a97e9a4c9d73d40ce76d1504c14ba1940 stopped OK
>>> +---------------------------------------------------------------------------------+
>>> Grid uptime: 00:00:00:778
Ignite set example finished.
If and only if I set "collocated" of the CollectionConfiguration instance to true, the size of the IgniteSet was 10 as expected. But according to the official documentation, if there is a lot of data in an IgniteSet, then false is the recommended value for the "collocated" attribute. So what's wrong here?
You can populate the IgniteSet from a client node instead; I have tested this and it works. Like this: Ignition.setClientMode(true);
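For reference, a minimal sketch of that client-mode workaround, assuming the same example config file and set settings as in the question (the class name and set name are placeholders):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteSet;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CollectionConfiguration;

public class ClientModeSetExample {
    public static void main(String[] args) {
        // Join the cluster as a client node before creating the set.
        Ignition.setClientMode(true);

        try (Ignite ignite = Ignition.start("examples/config/example-ignite.xml")) {
            CollectionConfiguration setCfg = new CollectionConfiguration();
            setCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
            setCfg.setCacheMode(CacheMode.PARTITIONED);

            // Same initialization as in the original example, but done from a client node.
            IgniteSet<String> set = ignite.set("exampleSet", setCfg);

            for (int i = 0; i < 10; i++)
                set.add(Integer.toString(i));

            System.out.println("Set size after initializing: " + set.size()); // expected: 10
        }
    }
}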
Looks like IgniteSet has a bug. Thank you for the report.
For now, you can use a cache directly instead of a set. The same example would look like this:
import java.util.ArrayList;
import java.util.List;
import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;
import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
import static org.apache.ignite.cache.CacheMode.PARTITIONED;

public class IgniteSetExample {
    // Placeholder value stored for every key; only the keys matter.
    static final Object DUMMY = new Object();

    public static void main(String[] args) throws Exception {
        Ignite ignite = Ignition.start("examples/config/example-ignite.xml");

        CacheConfiguration<String, Object> cacheCfg = new CacheConfiguration<>("setCache");
        cacheCfg.setAtomicityMode(TRANSACTIONAL);
        cacheCfg.setCacheMode(PARTITIONED);

        IgniteCache<String, Object> cache = ignite.getOrCreateCache(cacheCfg);
        System.out.println("Set size before init: " + cache.size());

        for (int i = 0; i < 10; i++) {
            cache.put(Integer.toString(i), DUMMY);
            System.out.println("Set elements: " + getKeys(cache));
        }

        System.out.println("Set size after init: " + cache.size());
    }

    static <T> List<T> getKeys(IgniteCache<T, ?> cache) {
        List<T> keys = new ArrayList<>(cache.size());
        for (Cache.Entry<T, ?> e : cache)
            keys.add(e.getKey());
        return keys;
    }
}
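Note that this emulates the set on top of a plain cache: the shared DUMMY value is stored for every key, so cache.size() takes over the role of set.size() and iterating the keys via getKeys() replaces iterating the set.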
I have the following table of data (a Delta table, which is mapped as a Hive table):
UtilEvents:
-----------------------------------------------------------------------------
SerialNumber EventTime UseCase RemoteHost RemoteIP
-----------------------------------------------------------------------------
131058 2022-12-02T00:31:29 Send Host1 RemoteIP1
131058 2022-12-21T00:33:24 Receive Host1 RemoteIP1
131058 2022-12-22T01:35:33 Send Host1 RemoteIP1
131058 2022-12-20T01:36:53 Receive Host1 RemoteIP1
131058 2022-12-11T00:33:28 Send Host2 RemoteIP2
131058 2022-12-15T00:35:18 Receive Host2 RemoteIP2
131058 2022-12-12T02:29:11 Send Host2 RemoteIP2
131058 2022-12-01T02:30:56 Receive Host2 RemoteIP2
I need a result set which is grouped by UseCase and RemoteHost, but with max value of EventTime.
So the result should look something like :
Result_UtilEvents:
----------------------------------------------------------------
SerialNumber EventTime UseCase RemoteHost
----------------------------------------------------------------
131058 2022-12-21T00:33:24 Receive Host1
131058 2022-12-22T01:35:33 Send Host1
131058 2022-12-15T00:35:18 Receive Host2
131058 2022-12-12T02:29:11 Send Host2
Could you suggest an efficient Databricks SQL query that gives this result?
PS: Intermediate dataframe results cannot be used in this case. It has to be pure SQL.
I think you just need to group by and take the max of EventTime together with the columns you are grouping on. I added SerialNumber to the group by, as it is not clear how that column should be treated.
import datetime
import pyspark.sql.functions as F
x = [
    (131058, datetime.datetime(2022, 12, 2, 0, 31, 29), "Send", "Host1", "RemoteIP1"),
    (131058, datetime.datetime(2022, 12, 21, 0, 33, 24), "Receive", "Host1", "RemoteIP1"),
    (131058, datetime.datetime(2022, 12, 22, 1, 35, 33), "Send", "Host1", "RemoteIP1"),
    (131058, datetime.datetime(2022, 12, 20, 1, 36, 53), "Receive", "Host1", "RemoteIP1"),
    (131058, datetime.datetime(2022, 12, 11, 0, 33, 28), "Send", "Host2", "RemoteIP2"),
    (131058, datetime.datetime(2022, 12, 15, 0, 35, 18), "Receive", "Host2", "RemoteIP2"),
    (131058, datetime.datetime(2022, 12, 12, 2, 29, 11), "Send", "Host2", "RemoteIP2"),
    (131058, datetime.datetime(2022, 12, 1, 2, 30, 56), "Receive", "Host2", "RemoteIP2")
]

df = spark.createDataFrame(x, schema=["SerialNumber", "EventTime", "UseCase", "RemoteHost", "RemoteIp"])
df.createOrReplaceTempView("test_table")

spark.sql(
    "select SerialNumber, Max(EventTime) as EventTime, UseCase, RemoteHost "
    "from test_table "
    "group by SerialNumber, UseCase, RemoteHost"
).show()
Output:
+------------+-------------------+-------+----------+
|SerialNumber| EventTime|UseCase|RemoteHost|
+------------+-------------------+-------+----------+
| 131058|2022-12-22 01:35:33| Send| Host1|
| 131058|2022-12-21 00:33:24|Receive| Host1|
| 131058|2022-12-12 02:29:11| Send| Host2|
| 131058|2022-12-15 00:35:18|Receive| Host2|
+------------+-------------------+-------+----------+
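Since pure SQL is required, note that the statement inside spark.sql above is plain SQL and can be run as-is in a Databricks SQL cell against your UtilEvents table; test_table here is only the temp view created for this reproducible example.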
I have one master (x.x.x.61), one volume server (x.x.x.63), and one filer + S3 API (x.x.x.62) set up on 3 separate machines.
I added a new volume server (x.x.x.64) because I've maxed out the storage space on the first volume server.
But I'm still not able to add new files through the filer UI (http://x.x.x.62:8888).
In my filer logs, I noticed that it's trying to connect to the first volume server's IP address, which is out of space. Am I missing a configuration for it to connect to the new volume server?
E1221 11:09:48.027930 upload_content.go:351 unmarshal http://x.x.x.63:8080/7,2bafadaa4666: {"error":"failed to write to local disk: write data/chrisDir_7.dat: no space left on device"}{"name":"app_progress4.apk","size":2353734,"eTag":"92b10892"}
W1221 11:09:48.027950 upload_content.go:168 uploading 2 to http://x.x.x.63:8080/7,2bafadaa4666: unmarshal http://x.x.x.63:8080/7,2bafadaa4666: invalid character '{' after top-level value
E1221 11:09:48.027965 filer_server_handlers_write_upload.go:209 upload error: unmarshal http://x.x.x.63:8080/7,2bafadaa4666: invalid character '{' after top-level value
I1221 11:09:48.028022 common.go:70 response method:POST URL:/buckets/chrisDir/ with httpStatus:500 and JSON:{"error":"unmarshal http://x.x.x.63:8080/2,2ba84b2894a7: invalid character '{' after top-level value"}
In the master log, I see that the second volume server was added successfully and that the master.toml maintenance scripts were executed to rebalance:
I1221 11:36:09.522690 node.go:225 topo:DefaultDataCenter:DefaultRack adds child x.x.x.64:8080
I1221 11:36:09.522716 node.go:225 topo:DefaultDataCenter:DefaultRack:x.x.x.64:8080 adds child
I1221 11:36:09.522724 master_grpc_server.go:138 added volume server 0: x.x.x.64:8080 [3caad049-38a6-43f6-8192-d1082c5e838b]
I1221 11:36:09.522744 master_grpc_server.go:49 found new uuid:x.x.x.64:8080 [3caad049-38a6-43f6-8192-d1082c5e838b] , map[x.x.x.63:8080:[5005b287-c812-4dba-ba41-9b5a6a022f12] x.x.x.64:8080:[3caad049-38a6-43f6-8192-d1082c5e838b]]
I1221 11:36:09.522866 volume_layout.go:393 Volume 11 becomes writable
I1221 11:36:09.522880 master_grpc_server.go:199 master see new volume 11 from x.x.x.64:8080
I1221 11:38:33.481721 master_server.go:323 executing: lock []
I1221 11:38:33.482821 master_server.go:323 executing: ec.encode [-fullPercent=95 -quietFor=1h]
I1221 11:38:33.483925 master_server.go:323 executing: ec.rebuild [-force]
I1221 11:38:33.484372 master_server.go:323 executing: ec.balance [-force]
I1221 11:38:33.484777 master_server.go:323 executing: volume.balance [-force]
2022/12/21 11:38:48 copying volume 21 from x.x.x.63:8080 to x.x.x.64:8080
I1221 11:38:48.486778 volume_layout.go:407 Volume 21 has 0 replica, less than required 1
I1221 11:38:48.486798 volume_layout.go:380 Volume 21 becomes unwritable
I1221 11:38:48.494998 volume_layout.go:393 Volume 21 becomes writable
2022/12/21 11:38:48 tailing volume 21 from x.x.x.63:8080 to x.x.x.64:8080
2022/12/21 11:38:58 deleting volume 21 from x.x.x.63:8080
....
How I start master
./weed master -mdir='.'
How I start volume
./weed volume -max=100 -mserver="x.x.x.61:9333" -dir="$dataDir"
How I start filer and s3
./weed filer -master="x.x.x.61:9333" -s3
What's in $HOME/.seaweedfs
drwxrwxr-x 2 seaweedfs seaweedfs 4096 Dec 20 16:01 .
drwxr-xr-x 20 seaweedfs seaweedfs 4096 Dec 20 16:01 ..
-rw-r--r-- 1 seaweedfs seaweedfs 2234 Dec 20 15:57 master.toml
Content of master.toml file
# Put this file to one of the location, with descending priority
# ./master.toml
# $HOME/.seaweedfs/master.toml
# /etc/seaweedfs/master.toml
# this file is read by master
[master.maintenance]
# periodically run these scripts are the same as running them from 'weed shell'
scripts = """
lock
ec.encode -fullPercent=95 -quietFor=1h
ec.rebuild -force
ec.balance -force
volume.deleteEmpty -quietFor=24h -force
volume.balance -force
volume.fix.replication
s3.clean.uploads -timeAgo=24h
unlock
"""
sleep_minutes = 7 # sleep minutes between each script execution
[master.sequencer]
type = "raft" # Choose [raft|snowflake] type for storing the file id sequence
# when sequencer.type = snowflake, the snowflake id must be different from other masters
sequencer_snowflake_id = 0 # any number between 1~1023
# configurations for tiered cloud storage
# old volumes are transparently moved to cloud for cost efficiency
[storage.backend]
[storage.backend.s3.default]
enabled = false
aws_access_key_id = "" # if empty, loads from the shared credentials file (~/.aws/credentials).
aws_secret_access_key = "" # if empty, loads from the shared credentials file (~/.aws/credentials).
region = "us-east-2"
bucket = "your_bucket_name" # an existing bucket
endpoint = ""
storage_class = "STANDARD_IA"
# create this number of logical volumes if no more writable volumes
# count_x means how many copies of data.
# e.g.:
# 000 has only one copy, copy_1
# 010 and 001 has two copies, copy_2
# 011 has only 3 copies, copy_3
[master.volume_growth]
copy_1 = 7 # create 1 x 7 = 7 actual volumes
copy_2 = 6 # create 2 x 6 = 12 actual volumes
copy_3 = 3 # create 3 x 3 = 9 actual volumes
copy_other = 1 # create n x 1 = n actual volumes
# configuration flags for replication
[master.replication]
# any replication counts should be considered minimums. If you specify 010 and
# have 3 different racks, that's still considered writable. Writes will still
# try to replicate to all available volumes. You should only use this option
# if you are doing your own replication or periodic sync of volumes.
treat_replication_as_minimums = false
System status
curl http://localhost:9333/dir/assign?pretty=y
{
"fid": "9,2bb2fd75d706",
"url": "x.x.x.63:8080",
"publicUrl": "x.x.x.63:8080",
"count": 1
}
curl http://x.x.x.61:9333/cluster/status?pretty=y
{
"IsLeader": true,
"Leader": "x.x.x.61:9333",
"MaxVolumeId": 21
}
curl "http://x.x.x.61:9333/dir/status?pretty=y"
{
"Topology": {
"Max": 200,
"Free": 179,
"DataCenters": [
{
"Id": "DefaultDataCenter",
"Racks": [
{
"Id": "DefaultRack",
"DataNodes": [
{
"Url": "x.x.x.63:8080",
"PublicUrl": "x.x.x.63:8080",
"Volumes": 20,
"EcShards": 0,
"Max": 100,
"VolumeIds": " 1-10 12-21"
},
{
"Url": "x.x.x.64:8080",
"PublicUrl": "x.x.x.64:8080",
"Volumes": 1,
"EcShards": 0,
"Max": 100,
"VolumeIds": " 11"
}
]
}
]
}
],
"Layouts": [
{
"replication": "000",
"ttl": "",
"writables": [
6,
1,
2,
7,
3,
4,
5
],
"collection": "chrisDir"
},
{
"replication": "000",
"ttl": "",
"writables": [
16,
19,
17,
21,
15,
18,
20
],
"collection": "chrisDir2"
},
{
"replication": "000",
"ttl": "",
"writables": [
8,
12,
13,
9,
14,
10,
11
],
"collection": ""
}
]
},
"Version": "30GB 3.37 438146249f50bf36b4c46ece02a430f44152777f"
}
I have the current date and a list which is coming from the server. I want to find all of the entries with the nearest date.
"Results": [
{
"date": "May 9, 2020 8:09:03 PM",
"id": 1
},
{
"date": "Apr 14, 2020 8:09:03 PM",
"id": 2
},
{
"date": "Mar 15, 2020 8:09:03 PM",
"id": 3
},
{
"date": "May 9, 2020 8:19:03 PM",
"id": 4
}
],
Today's date is Wed Jul 20 00:00:00 GMT+01:00 2022, which I am getting as described in my own earlier Stack Overflow question; that is where I take the current date from.
Expected Output
[Result(date=May 9, 2020 8:09:03 PM, id = 1), Result(date=May 9, 2020 8:19:03 PM, id = 4)]
So how can I do this in an idiomatic way in Kotlin?
There are quite a few ways this could be solved, ranging from simple to moderately complex depending on requirements such as efficiency. With some assumptions, below is a linear-time solution that is decently idiomatic and only 3 lines in essence.
import kotlin.math.abs
import java.lang.Long.MAX_VALUE
import java.time.LocalDateTime
import java.time.temporal.ChronoUnit
import java.time.temporal.Temporal
data class Result(val date: LocalDateTime, val id: Int)
fun getClosestDays(toDate: Temporal, results: List<Result>): List<Result> {
    // Find the minimum amount of days to the current date
    var minimumDayCount = MAX_VALUE
    results.forEach { minimumDayCount = minOf(minimumDayCount, abs(ChronoUnit.DAYS.between(toDate, it.date))) }

    // Grab all results that match the minimum day count
    return results.filter { abs(ChronoUnit.DAYS.between(toDate, it.date)) == minimumDayCount }
}

fun main() {
    getClosestDays(
        LocalDateTime.now(),
        listOf(
            Result(LocalDateTime.of(2020, 5, 9, 8, 9, 3), 1),
            Result(LocalDateTime.of(2020, 4, 14, 8, 9, 3), 2),
            Result(LocalDateTime.of(2020, 3, 15, 8, 9, 3), 3),
            Result(LocalDateTime.of(2020, 5, 9, 8, 19, 3), 4)
        )
    ).also { println(it) }
}
Here is the output:
[Result(date=2020-05-09T08:09:03, id=1), Result(date=2020-05-09T08:19:03, id=4)]
val primes = generateSequence(2 to generateSequence(3) { it + 2 }) {
    val currSeq = it.second.iterator()
    val nextPrime = currSeq.next()
    nextPrime to currSeq.asSequence().filter { it % nextPrime != 0 }
}.map { it.first }

println(primes.take(10).toList()) // prints [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
I tried to understand how this function works, but it is not easy for me.
Could someone explain how it works? Thanks.
It generates an infinite sequence of primes using the "Sieve of Eratosthenes" (see here: https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes).
This implementation uses a sequence of pairs to do this. The first element of every pair is the current prime, and the second element is a sequence of integers larger than that prime that are not divisible by it or by any previous prime.
It starts with the pair 2 to [3, 5, 7, 9, 11, 13, 15, 17, ...], which is given by 2 to generateSequence(3) { it + 2 }.
Using this pair, we create the next pair of the sequence by taking the first element of the inner sequence (which is 3), and then removing all numbers divisible by 3 from that sequence (removing 9, 15, 21 and so on). This gives us the pair 3 to [5, 7, 11, 13, 17, ...]. Repeating this pattern gives us all the primes.
After creating a sequence of pairs like this, we are finally doing .map { it.first } to pick only the actual primes, and not the inner sequences.
The sequence of pairs will evolve like this:
2 to [3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, ...]
3 to [5, 7, 11, 13, 17, 19, 23, 25, 29, ...]
5 to [7, 11, 13, 17, 19, 23, 29, ...]
7 to [11, 13, 17, 19, 23, 29, ...]
11 to [13, 17, 19, 23, 29, ...]
13 to [17, 19, 23, 29, ...]
// and so on
I have data for ReactNativeWheelPicker which looks like this:
const hoursData = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12];
and then the picker receives this data: data={hoursData}
I tried to use the following expression to convert e.g. 1 to 01:
(hoursData < 10 ? "0" + hoursData : hoursData)
Unfortunately, wherever I put this, the wheel picker always shows a single-digit number.
I'm using wheel picker from this repo: https://github.com/ElekenAgency/ReactNativeWheelPicker
Any suggestions would be appreciated ;)
EDIT (updated full code):
const hoursData = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23];

class TimePickerMenu extends Component {
  constructor(props) {
    super(props);
    this.state = {
      selectedHours: 9,
    };
  }

  render() {
    return (
      <View style={styles.rowPicker}>
        <WheelPicker
          onItemSelected={(event) => this.setState({ index: event.position, selectedHours: event.data })}
          isCurved
          isCyclic
          data={hoursData}
          style={styles.wheelPicker}
        />
      </View>
    );
  }
}
You need to map the numbers to zero-padded strings before passing them to the picker:
const hoursData = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]

const data = hoursData.map(data => {
  if (data < 10) {
    return '0' + data
  }
  return '' + data
})
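Then pass the mapped array to the picker instead of the raw numbers, i.e. data={data} on the WheelPicker, so every entry the wheel shows is already a zero-padded two-character string.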