Cucumber Step definition failing - ruby-on-rails-3

I have a step definition that is failing.
From the feature below, it is failing at 'And that feature has a issue' (undefined local variable 'title'):
Feature: Viewing issue
  In order to view the issues for a feature
  As a user
  I want to see them on that feature's page

  Background:
    Given there is a release called "Confluence"
    And that release has a feature:
      | title          | description                   |
      | Make it shiny! | Gradients! Starbursts! Oh my! |
    And that feature has a issue:
      | title       | description            |
      | First Issue | This is a first issue. |
    And I am on the homepage

  Scenario: Viewing issue for a given feature
    When I follow "Confluence"
    Then I should see "Standards compliance"
    When I follow "Standards compliance"
    Then I should see "First Issue"
    And I should see "This is a first issue."
How do I write a step definition for it?
This is what I have for the feature step definition, and it works great, but I've tried doing the same for the issue objects and it doesn't work:
Given /^that release has a feature:$/ do |table|
  table.hashes.each do |attributes|
    @release.features.create!(attributes)
  end
end

As a best practice, you shouldn't use instance variables in your steps; it makes them dependent on each other.
I'd write your steps like this (in pseudo-ish code):
Given there is a release called <name>
  # create model

Given release <name> has a feature <table>
  release = Release.find_by_name(<name>)
  # rest of your code here is fine, but reference 'release' instead of '@release'

Given release <name>'s <feature_name> has issues <table>
  release = Release.find_by_name(<name>)
  feature = release.features.find_by_name(<feature_name>)
  table.hashes.each do |attributes|
    # create issue
  end
Then your Cucumber feature would read better like this:
Given there is a release called "Confluence"
And release "Confluence" has a feature:
  | title          | description                   |
  | Make it shiny! | Gradients! Starbursts! Oh my! |
And release "Confluence"'s "Make it shiny!" feature has issues:
  | title       | description            |
  | First Issue | This is a first issue. |
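The pseudo-code above might translate into step definitions like the sketch below. Only the Given(...) blocks belong in features/step_definitions/; everything marked as a stand-in is hypothetical and exists solely to make the sketch self-contained and runnable without Cucumber or ActiveRecord. With real models the lookups would be Release.find_by_name(name) and release.features.find_by_name(...), and the table argument would be a Cucumber table, iterated with table.hashes.each.

```ruby
# --- stand-ins (hypothetical; replace with Cucumber and your AR models) ---
STEPS = {}

def Given(regexp, &block)
  STEPS[regexp] = block
end

# Dispatch a step line to the first matching definition.
def run_step(text, table = nil)
  regexp, block = STEPS.find { |r, _| r.match?(text) }
  args = regexp.match(text).captures
  args << table unless table.nil?
  block.call(*args)
end

Release = Struct.new(:name) do
  def features
    @features ||= []
  end
end

Feature = Struct.new(:title, :description) do
  def issues
    @issues ||= []
  end
end

RELEASES = []

# --- the step definitions themselves --------------------------------------
Given(/^there is a release called "([^"]*)"$/) do |name|
  RELEASES << Release.new(name)
end

Given(/^release "([^"]*)" has a feature:$/) do |name, table|
  release = RELEASES.find { |r| r.name == name }
  table.each do |attrs|                       # Cucumber: table.hashes.each
    release.features << Feature.new(attrs["title"], attrs["description"])
  end
end

Given(/^release "([^"]*)"'s "([^"]*)" feature has issues:$/) do |name, feature_title, table|
  release = RELEASES.find { |r| r.name == name }
  feature = release.features.find { |f| f.title == feature_title }
  table.each { |attrs| feature.issues << attrs }
end

# Walk through the Background from the feature above:
run_step('there is a release called "Confluence"')
run_step('release "Confluence" has a feature:',
         [{ "title" => "Make it shiny!", "description" => "Gradients! Starbursts! Oh my!" }])
run_step(%q{release "Confluence"'s "Make it shiny!" feature has issues:},
         [{ "title" => "First Issue", "description" => "This is a first issue." }])
```

Note how each step re-finds its objects by name instead of stashing them in instance variables, which is the decoupling the answer recommends.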

Related

How do I bring this up to date (CrystalDecisions.Windows.Forms.CrystalReportViewer)?

Me.rpt_Viewer.DisplayGroupTree = False
This generates a warning saying that DisplayGroupTree is obsolete
+----------+---------+----------------------------------------------------------------------+-------------------------------------+------+-------------------+
| Severity | Code | Description | Project File | Line | Suppression State |
+----------+---------+----------------------------------------------------------------------+-------------------------------------+------+-------------------+
| Warning | BC40008 | 'Public Overloads Property DisplayGroupTree As Boolean' is obsolete. | ProjectName C:\Projects\frm_Main.vb | 116 | Active |
+----------+---------+----------------------------------------------------------------------+-------------------------------------+------+-------------------+
I spent hours googling this and many posts say exactly what I know, that it's obsolete.
Answer from
https://answers.sap.com/questions/4570064/displaygrouptree-property-is-obsolete---crystalrep.html
I'm assuming you're using Crystal Reports 2008 with Visual Studio 2008, and not the Crystal Reports Basic that comes with Visual Studio 2008. For the former, DisplayGroupTree is obsolete, but not for the latter.
Crystal Reports 2008 has a new Parameter Panel, so the left-hand pane can contain either the Parameter Panel or the Group Tree panel. Since it's no longer a binary choice, DisplayGroupTree was deprecated.
Set the ToolPanelView property to either None (or null), ToolPanelViewType.GroupTree or ToolPanelViewType.ParameterPanel.
// viewer stands for your existing CrystalReportViewer instance
CrystalDecisions.Windows.Forms.CrystalReportViewer viewer;
viewer.ToolPanelView = CrystalDecisions.Windows.Forms.ToolPanelViewType.None;

Using hexagonal architecture in embedded systems

I'm trying to work out how the hexagonal (ports and adapters) architecture might be used in the context of an embedded software system.
If I understand correctly, the architecture is something like this:
/-----------------\ /-----------------------------\
| | | |
| Application | | Domain |
| | | |
| +----------+ | | +---------+ |
| | +-------------->|interface| | /-------------------\
| +----------+ | | +---------+ | | |
| | | ^ | | Infrastructure |
| | | | | | |
\---------------+-/ | +---+---+ +---------+ | | +----------+ |
| | +---->|interface|<-------------+ | |
Code that allows | +-------+ +---------+ | | +----------+ |
interaction with | | | |
user \--------------------------+--/ \-----------------+-/
Business logic What we (the business)
depend on - persistence,
crypto services etc
Let's take a concrete example of where one of the user interfaces is a touch screen that the main controller talks to over a serial UART. The software sends control commands to draw elements on the screen and the user actions generate text messages.
The bits I see working in this scenario are:
- Serial driver
  - Sends data over the UART
  - Receives data (an ISR is invoked)
- Screen command builder
- Screen response/event parser
- Business logic, such as presenting and responding to menus, widgets etc.
The bit I'm struggling with is where these pieces should reside. Intuitively, I feel it's as follows:
Infrastructure - UART driver
Domain - Business logic
Application - Message builder/parser
But this arrangement forces a dependency between Application and Infrastructure, since the parser needs to retrieve data from, and the builder needs to send data through, the UART.
Moving the message builder and parser into Infrastructure or Domain takes the whole user-interaction concern away from the Application.
Whichever way I look at it, it seems to violate some aspect of the diagram that I drew above. Any suggestions?
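The dependency directions in the diagram above can be sketched in illustrative code (Ruby here, all names hypothetical; the point is only who depends on whom): both the application and the infrastructure point at interfaces owned by the domain, never at each other.

```ruby
module Domain
  # A port the infrastructure must implement (e.g. a byte transport).
  module TransportPort
    def write(bytes)
      raise NotImplementedError
    end
  end

  # Business logic depends only on the port, not on any concrete UART.
  class MenuLogic
    def initialize(transport)
      @transport = transport
    end

    def show_menu
      @transport.write("DRAW_MENU")
    end
  end
end

module Infrastructure
  # Adapter implementing the domain-owned port. A real one would talk to
  # the UART driver; this fake just records what was sent.
  class FakeUart
    include Domain::TransportPort

    attr_reader :sent

    def initialize
      @sent = []
    end

    def write(bytes)
      @sent << bytes
    end
  end
end

# Application: wires the adapter into the domain and drives it.
uart  = Infrastructure::FakeUart.new
logic = Domain::MenuLogic.new(uart)
logic.show_menu
```

The fake adapter also shows a side benefit of this arrangement: the business logic can be tested without any hardware present.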

Issue on connecting to the Image Acquisition Device using HALCON

My setup includes a POE camera connected directly to my computer, on which I have HDevelop. For the past few days I have been running into a problem where the first attempt to connect to the camera using HDevelop fails.
When using Connect from the Image Acquisition GUI, I get an error stating "HALCON ERROR. Image acquisition: device cannot be initialized"
When using the open_framegrabber() operator from the Program Console, I get the same error, along with the HALCON error code 5312.
After I get this error, attempting the connection again succeeds. This is my workaround at the moment, but it's annoying, as it happens quite frequently, and I am not sure what the cause is. I tried pinging my camera from the command prompt, which showed no ping losses, and when using the camera from the VIMBA viewer I do not get such connection issues.
I know this is not a support site where I should be asking such questions, but if anyone can give me some inputs on solving this, it would be of great help.
Regards,
Sanjay
To solve this, it is important to understand HALCON's framegrabber communication object; I assume you are coding in HDevelop.
To create the communication channel with the camera properly, and to avoid the connection being rejected due to parameter misconfiguration, specify the camera's device ID when creating the framegrabber instead of relying on the default options.
To list the devices available for your communication protocol, use:
info_framegrabber('GigEVision2', 'info_boards', Information, ValueList)
where the first parameter is the communication protocol, and ValueList returns the information for all connected devices as token:param pairs separated by '|', e.g.:
| device:ac4ffc00d5db_SVSVISTEKGmbH_eco274MVGE67 | unique_name:ac4ffc00d5db_SVSVISTEKGmbH_eco274MVGE67 | interface:Esen_ITF_78d004253353c0a80364ffffff00 | producer:Esen | vendor:SVS-VISTEK GmbH | model:eco274MVGE67 | tl_type:GEV | device_ip:192.168.3.101/24 | interface_ip:192.168.3.100/24 | status:busy | device:ac4ffc009cae_SVSVISTEKGmbH_eco274MVGE67 | unique_name:ac4ffc009cae_SVSVISTEKGmbH_eco274MVGE67 | interface:Esen_ITF_78d004253354c0a80264ffffff00 | producer:Esen | vendor:SVS-VISTEK GmbH | model:eco274MVGE67 | tl_type:GEV | device_ip:192.168.2.101/24 | interface_ip:192.168.2.100/24 | status:busy | device:ac4ffc009dc6_SVSVISTEKGmbH_eco274MVGE67 | unique_name:ac4ffc009dc6_SVSVISTEKGmbH_eco274MV
... and so on for the remaining devices.
This way you can extract the device ID (the device: token) automatically and pass it when creating the framegrabber:
open_framegrabber ('GigEVision2', 0, 0, 0, 0, 0, 0, 'default', -1, 'default', -1, 'false', 'put the device ID here', '', -1, -1, AcqHandle)
You will then be able to connect directly, or build an automatic reconnection routine.
I hope this information helps you.

Mixing fanout and direct exchanges with AMQP

I have two kinds of workers for a same event.
I would like a message to be dispatched to only one of some of my workers (like a "direct" exchange), but the other workers should all process the message (like a fanout).
It's a bit hard to explain, but the idea is there, and maybe the following schema will help you understand what I would like.
Do you have a solution?
Kind regards,
Ben
If I understand you correctly, you would like "type-1" workers where only a single worker works on an item, while the "type-2" workers can be treated as multiple handlers (like log handlers) that should all receive the event.
If I'm right, then you might be able to chain two exchanges:
- exchange1 - fanout - all the "type-2" workers (loggers) wait here, so they all get the event
- exchange2 - direct - all your "type-1" workers wait here, so only one gets the event
The trick: you need to make sure you have a consumer listening on exchange1 that also publishes to exchange2.
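The chain described above can be modeled in memory like this (plain Ruby, no broker; purely illustrative). The fanout exchange copies each message to every bound queue; one of those consumers is the relay that republishes to the work queue, where competing workers each take a message.

```ruby
# In-memory model of the fanout -> direct chain described above.
class FanoutExchange
  def initialize
    @queues = []
  end

  def bind(queue)
    @queues << queue
  end

  # Fanout: every bound queue gets a copy of the message.
  def publish(message)
    @queues.each { |q| q << message }
  end
end

logger_a = []   # a "type-2" handler's queue
logger_b = []   # another "type-2" handler's queue
relay    = []   # the consumer that republishes to the second exchange
work_q   = []   # single shared queue on the direct exchange

ex1 = FanoutExchange.new
[logger_a, logger_b, relay].each { |q| ex1.bind(q) }

ex1.publish("event")

# The relay consumer forwards everything it sees to the work queue:
work_q.concat(relay)

# Competing consumers on work_q: each message goes to exactly one worker.
winner = work_q.shift
```

The key property being modeled: loggers each see every event, while type-1 workers share one queue and therefore split the events between them.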
Your best option is to use routing keys with multiple bindings between your exchange and your queues.
I would recommend either a direct or a topic exchange for this, but not fanout.
To model your example image above, your configuration would look like this:
| exchange | routing key | queue |
|----------|-------------|---------|
| some.ex | type.1 | queue.1 |
| some.ex | type.1 | queue.2 |
| some.ex | type.1 | queue.3 |
| some.ex | type.2 | queue.4 |
| some.ex | type.2 | queue.5 |
Basically, you need a routing key per queue and a queue per worker.
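To see how that binding table behaves, here is a minimal in-memory model of a direct exchange (plain Ruby, no broker; against a real RabbitMQ you would declare these bindings with a client library such as Bunny). A direct exchange copies a message to every queue whose binding key equals the routing key; workers sharing one queue would then compete for its messages.

```ruby
# Minimal in-memory model of a direct exchange with per-queue bindings.
class DirectExchange
  def initialize
    @bindings = Hash.new { |h, k| h[k] = [] }  # routing key => queue names
    @queues   = Hash.new { |h, k| h[k] = [] }  # queue name  => messages
  end

  def bind(queue, routing_key)
    @bindings[routing_key] << queue
  end

  # Copy the message into every queue bound with this routing key.
  def publish(routing_key, message)
    @bindings[routing_key].each { |q| @queues[q] << message }
  end

  def queue(name)
    @queues[name]
  end
end

# The binding table from the answer above:
ex = DirectExchange.new
ex.bind("queue.1", "type.1")
ex.bind("queue.2", "type.1")
ex.bind("queue.3", "type.1")
ex.bind("queue.4", "type.2")
ex.bind("queue.5", "type.2")

ex.publish("type.1", "event A")  # copied to queue.1, queue.2, queue.3
ex.publish("type.2", "event B")  # copied to queue.4 and queue.5
```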
You may want to read up a bit more on exchanges, queues and bindings to get a better understanding of when each is used. I have a few ebooks that cover this (along with other RMQ usage scenarios) at https://leanpub.com/u/derickbailey

Geode Redis Adapter

Hi all, hoping someone can assist me with some queries and configuration for the use of the Geode Redis Adapter. I'm having some difficulty ascertaining how, or whether, I can configure a number of Redis servers within my Geode cluster to function in a high-availability setup.
I'm very new to Geode, but understand that in a traditional Geode application, the client interacts with a locator process to access data from the cluster and balance load. Given that the aim of this adapter is to function as a drop-in replacement for Redis (i.e. no change required on the client) I imagine it functions somewhat differently.
Here is what I have tried so far:
I have built from source according to this link and successfully got the gfsh cli up on 2 CentOS 7 VMs:
192.168.0.10: host1
192.168.0.15: host2
On host1, I run the following commands:
gfsh>start locator --name=locator1 --bind-address=192.168.0.10 --port=10334
gfsh>start server --name=redis --redis-bind-address=192.168.0.10 --redis-port=11211 --J=-Dgemfireredis.regiontype=PARTITION_REDUNDANT
On host2, I run the following command:
gfsh>start server --name=redis2 --redis-bind-address=192.168.0.15 --redis-port=11211 --J=-Dgemfireredis.regiontype=PARTITION_REDUNDANT --locators=192.168.0.10[10334]
On host1, I examine the current configuration:
gfsh>list members
Name | Id
-------- | -------------------------------------------------
locator1 | 192.168.0.10(locator1:16629:locator)<ec><v0>:1024
redis2 | 192.168.0.15(redis2:6022)<ec><v2>:1024
redis | 192.168.0.10(redis:16720)<ec><v1>:1025
gfsh>list regions
List of regions
-----------------
__HlL
__ReDiS_MeTa_DaTa
__StRiNgS
For each of the regions, I can see both server redis & redis2 as Hosting Members - e.g.
gfsh>describe region --name=__StRiNgS
..........................................................
Name : __StRiNgS
Data Policy : normal
Hosting Members : redis2
redis
Non-Default Attributes Shared By Hosting Members
Type | Name | Value
------ | ----- | -----
Region | size | 0
| scope | local
At this point, I turned to the redis-cli for some testing. Given the previous output, my expectation was that if I set a key on one server, I should be able to read it back from the other server:
192.168.0.10:11211> set foo 'bar'
192.168.0.10:11211> get foo
"bar"
192.168.0.15:11211> get foo
(nil)
gfsh>describe region --name=__StRiNgS
..........................................................
Name : __StRiNgS
Data Policy : normal
Hosting Members : redis2
redis
Non-Default Attributes Shared By Hosting Members
Type | Name | Value
------ | ----- | -----
Region | scope | local
Non-Default Attributes Specific To The Hosting Members
Member | Type | Name | Value
------ | ------ | ---- | -----
redis2 | Region | size | 0
redis | Region | size | 1
As you can see, a query against host2 for the key added on host1 returned (nil). I'd greatly appreciate any help here. Is it possible to achieve what I'm aiming for, or does the Redis adapter only allow you to scale out a single server?
This may not be an answer, but it is probably too long for a comment.
I am not familiar with the specific Geode Redis Adapter you are talking about here. But from my experience with Gemfire/Geode, there are things you may want to check:
You started the first host without the locators param; I am not sure whether this will cause any problem with cluster formation. There are two ways to form a cluster in Gemfire: by specifying an mcast port or by specifying locators.
The scope of the region you are inspecting looks wrong. "local" will not replicate any updates. When it is set up correctly, it should show up as DISTRIBUTED_NO_ACK, DISTRIBUTED_ACK or GLOBAL, I suppose.
Hope this helps
Xiawei is correct: a region with scope "local" will not replicate entries to other members. The workaround could have been to simply create a region named __StRiNgS from gfsh, but since region names starting with two underscores are reserved for internal use, that's not possible.
I have filed this issue https://issues.apache.org/jira/browse/GEODE-1921 to fix the problem. I also attached a patch for this issue. With the patch applied I see that the __StRiNgS region now is PARTITION.
gfsh>start locator --name=locator1
gfsh>start server --name=redis --redis-port=11211
gfsh>list regions
List of regions
-----------------
HlL
StRiNgS
__ReDiS_MeTa_DaTa
gfsh>describe region --name=/StRiNgS
..........................................................
Name : StRiNgS
Data Policy : partition
Hosting Members : redis
Non-Default Attributes Shared By Hosting Members
Type | Name | Value
------ | ----------- | ---------
Region | size | 0
| data-policy | PARTITION
On host1:
start locator --name=locator1 --bind-address=192.168.0.10 --port=10334
start server --name=redis --redis-bind-address=192.168.0.10 --redis-port=11211 --J=-Dgemfireredis.regiontype=REPLICATE
NOTE: You have to use regiontype "REPLICATE" if you want the data to be replicated across the members.
On host2:
start server --name=redis2 --redis-bind-address=192.168.0.15 --redis-port=11211 --J=-Dgemfireredis.regiontype=REPLICATE --locators=192.168.0.10[10334]
https://geode.apache.org/docs/guide/11/developing/region_options/region_types.html