Bit of a random question :). I'm running a few Steam Team Fortress 2 (TF2) idle accounts to acquire items for the production of metal.
I've set up a few bash scripts to connect each account for a couple of hours a day through the night. What I've found over the last couple of years is that various things will cause the automatic account logins to fail, which I wouldn't normally notice until I decide to look at the server, which I do rarely.
So I thought one way to ensure things were working correctly would be to write a script that would log into each account (say daily) and list/count the number of items it has, log it, and have something like Splunk pick it up (which I already have running for other stuff).
So after that long-winded explanation, my question is: does anyone know how to write a script that can retrieve the item information from a TF2 account? My current bash scripts can log in to Steam and start up TF2, but I have no idea if that is the correct/best way to retrieve item info, or even whether it can be done from the same bash script used to log in.
Happy to use any language, but do have a fondness for Python.
Thanks.
Valve has released a web API which provides a flexible way to query your items from outside the game. First, grab an API key by following the instructions at http://steamcommunity.com/dev.
Next, in your script, fetch http://api.steampowered.com/IEconItems_440/GetPlayerItems/v0001/?key=API_KEY&steamid=STEAMID where API_KEY and STEAMID are your API key and 64-bit Steam ID respectively. This returns a JSON document containing a list of all the items in your inventory; just grab the size of the items array.
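For example, a small Python script along these lines should do it (API_KEY and STEAMID below are placeholders for your own values, and I'm assuming the usual result/items shape of the response):

```python
import json
import urllib.request

# Valve's TF2 (app 440) inventory endpoint; key and steamid are placeholders.
API_URL = ("http://api.steampowered.com/IEconItems_440/"
           "GetPlayerItems/v0001/?key={key}&steamid={steamid}")

def count_items(payload):
    """Return the number of items in a GetPlayerItems JSON response."""
    return len(payload["result"]["items"])

def fetch_item_count(api_key, steam_id):
    """Fetch one account's inventory and return its item count."""
    url = API_URL.format(key=api_key, steamid=steam_id)
    with urllib.request.urlopen(url) as resp:
        return count_items(json.load(resp))
```

Run that daily from cron, log the number per account, and Splunk can alert you when a count stops growing.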
I need advice on how to plan and design a project. My assignment is to take data from a user through a web page and pass the data as input to a program that runs locally on the computer. I imagine it as an installable application where the user types input into specific boxes; after clicking Run, the data is first modified (to ensure the input is correct, plus some basic transformations) and goes to the program (written in Java). After the result is computed, it is used for another computation that makes the results easily readable (graphs, known distributions, some intervals, etc.), and then displayed to the user, also in the form of a web page.
I am not sure what the process is or which programs I need to use (preferring free and open-source tools) to develop such a thing. I have not done this before, so I need advice on which topics and areas I should research.
I imagine steps similar to these:
1. Have an HTML page with CSS and JS functionality (input boxes and interactive controls for commands, such as "send data").
2. Make a connection between the HTML page and a local server.
3. Send the data from the HTML form to the program running on the server.
4. Let the program compute the result from the HTML input.
5. Process the result into a user-friendly format.
6. Send the data back to be displayed on the HTML page (graphically readable).
7. Return to the HTML page from step 1.
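As I understand it, the middle steps would look roughly like this, sketched in Python only for brevity (I assume a Java program would listen on a local port in the same way, and binding to 127.0.0.1 would keep it off the internet; the form field and computation are made up):

```python
import threading
import urllib.parse
from http.server import BaseHTTPRequestHandler, HTTPServer

class ComputeHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Receive the HTML form data.
        size = int(self.headers.get("Content-Length", 0))
        form = urllib.parse.parse_qs(self.rfile.read(size).decode())
        # Compute a result (placeholder computation on a made-up field "x").
        x = float(form["x"][0])
        result = x * x
        # Format the result and send it back to the page.
        body = f"result={result}".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the console quiet

def start_local_server():
    # Binding to 127.0.0.1 makes the server reachable only from this machine;
    # port 0 asks the OS for any free port.
    server = HTTPServer(("127.0.0.1", 0), ComputeHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

The HTML page would then POST its form to that local port and render whatever comes back.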
Is something like this workable? How can I make the Java program run locally? A friend of mine mentioned having it run on a port (how can I accomplish that)?
I cannot have it run on the internet; it needs to be as secure as possible (distributed to people but used separately). I have read about sockets; would they be a good fit in this case?
I would really appreciate tips on how to proceed further.
I am using Selenium with Cucumber (with Java, but that's not very relevant).
Let's say I have the following scenarios:
Feature: Sample Feature

  Scenario: do action A on website
    Given website is opened
    And user puts correct login and pass in fields
    And user presses login
    Then do action A

  Scenario: do action B on website
    Given website is opened
    And user puts correct login and pass in fields
    And user presses login
    Then do action B
Now, there will be hundreds of scenarios, and the website always requires logging in, so I assume that for each test scenario I will have to repeat the login steps (for example via a Background or a before-scenario hook).
I have been reading that this sort of test should be autonomous, so there should be no sharing of a WebDriver instance between scenarios.
Say:
Feature: Some feature

  Scenario: Log into website first
    Steps...

  Scenario: Do action A (while we are already logged in)
    Steps...

  Scenario: Do action B (all the time in the same browser instance we used in the login step and the action A step)
    Steps...
But I found people saying that's not the correct way; on the other hand, repeating the login procedure for every test scenario takes a lot of time when running many scenarios, since each needs to log in first. I was thinking about enabling access to the website without login for testing purposes. Is there any recommended approach? Thank you.
Every scenario that requires a user to be logged in will need to have the user log in. This is part of the cost of running at the integration level. However, login should not be an expensive, time-consuming operation: you only have to fill in two fields and submit them. It should take less than 100 ms to process the login.
Now, for unit testing this time is huge, but for an integration test, which by its nature involves a much bigger stack and usually simulated human interaction (otherwise why do you need your user to log in?), this time is a relatively small component of the overall scenario run time.
Because Cucumber works at the integration level, it is best not to use it as a testing tool; rather, it should be used as a tool to drive development. Instead of writing thousands of small assertions (as you might when unit testing) you need to write fewer, larger scenarios, i.e. each scenario needs to do more. As each scenario does more, the need for each scenario to be completely independent of any other scenario increases (the more you do, the more likely there are to be side effects on other things that are done). Sharing sessions and trying to avoid resetting the DB and session between scenarios turns out to be a false optimization that creates more problems than it solves.
It's perfectly fine for a scenario to do a lot before you get to its When. For example, imagine the following e-commerce scenario:
Scenario: Re-order favorite
Given I have a favorite order
When I view my orders
And I re-order my favorite order
Then I should be taken to the checkout
And my favourite items should be in the basket
Now, clearly an awful lot of stuff needs to happen before I can re-order, e.g.
I need to register
I need to make at least one previous order
I need to choose a favorite order
and of course there are lots of other things like
there need to be products to be ordered
All of this means that this scenario will take time to run, but that's OK because you are getting a lot of functionality from it. (When I wrote something similar a long time ago, the scenario took 1-2 seconds to run.) The login time for this sort of scenario is trivial compared to the time required for the rest of the setup.
I know nothing about Selenium with Cucumber (but I like Cucumber :-)).
I use Selenium with Python, where I can do the following:
from selenium import webdriver
profile = webdriver.FirefoxProfile(your_path_to_local_firefox_profile)
# like C:/Users/<USERNAME>/AppData/Roaming/Mozilla/Firefox/Profiles/<PROFILE_FOLDER>
browser = webdriver.Firefox(profile)
Now, via [WIN] + [R] -> Run -> "firefox.exe -p" you can create an extra Firefox profile for Selenium to use in the code above, and you can also try starting Firefox with that profile manually. In addition, if the website whose login you want to automate supports cookies, cache, etc., you may not have to log in every time: Firefox, started with that profile, may log in automatically because it has stored the login data.
I do not know if that helps, but I wanted to mention it.
I am working on a labour management service that allows its users to enter their timesheets, allows managers to modify those timesheets, and syncs attendance over web connectors or web services. Now we need a feature that sends a user an email if their total weekly hours exceed the weekly limit. It is event-driven: as soon as the shifts added for a user exceed the limit, an email is sent, as opposed to scheduled jobs.
I'd like to know some best practices I can use to achieve this. I don't want to dig into my existing code and add a number of lines to it; I'd like to accomplish this with as little change to the existing code base as possible, adding only the code that triggers an event, and let the event handler take care of sending the emails.
Techs used:
Spring4, Java7
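To make the shape concrete, here is roughly the decoupling I have in mind, sketched in Python only for brevity (in Spring 4 I assume this maps to publishing an ApplicationEvent and handling it in a separate listener bean; all names below are hypothetical, and the list stands in for the mailer):

```python
class EventBus:
    """Tiny publish/subscribe dispatcher standing in for the framework's."""
    def __init__(self):
        self._handlers = {}

    def subscribe(self, event_type, handler):
        self._handlers.setdefault(event_type, []).append(handler)

    def publish(self, event):
        for handler in self._handlers.get(type(event), []):
            handler(event)

class WeeklyLimitExceeded:
    def __init__(self, user, total_hours, limit):
        self.user, self.total_hours, self.limit = user, total_hours, limit

bus = EventBus()
sent = []  # stand-in for the email service, so the sketch is self-contained

# The only change to the existing timesheet code: publish when the check trips.
def add_shift(user, week_hours, new_hours, limit=40):
    total = week_hours + new_hours
    if total > limit:
        bus.publish(WeeklyLimitExceeded(user, total, limit))
    return total

# The email logic lives entirely in the handler, outside the timesheet code.
bus.subscribe(WeeklyLimitExceeded,
              lambda e: sent.append(f"{e.user}: {e.total_hours}h > {e.limit}h"))
```

The timesheet code never learns about email; new reactions to the same event can be added as further subscribers.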
PS:
I know this reads like a story and may not be suited to Stack Overflow; please feel free to move this question to wherever in the SO network it fits best.
I need to create a user journey such as:
The user is on the home page --> randomly clicks on a particular item --> views the item for about 10 seconds --> then goes back and clicks on another random item.
How do I generate a test script for this using Jython? I am using the Grinder tool.
Shashank,
I think your question is too broad to get a detailed response. If you ask a more specific question I think you will get a better answer.
I would say there are two general approaches available for doing what you want, and you could have success with either option:
Write your test directly in Jython. The script documentation (http://grinder.sourceforge.net/g3/scripts.html) and the script gallery (http://grinder.sourceforge.net/g3/script-gallery.html) will be helpful here.
Using the HTTP proxy (http://grinder.sourceforge.net/g3/tcpproxy.html), perform in a web browser the actions you wish your test to perform. The proxy will capture your browser actions and convert them into a Jython script. After that, a small amount of additional work will be required on your part to modify the generated script so that it detects the list of available items and randomly selects one.
My own personal preference is to code up the Grinder scripts from scratch. YMMV.
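Whichever route you take, the core of the journey is a small loop: pick a random item, dwell, go back. A sketch of that logic in plain Python (inside a Grinder TestRunner the fetch stub would be a wrapped HTTPRequest, the item URLs would come from parsing the home page, and the sleep would be grinder.sleep; all names here are hypothetical):

```python
import random

def browse_journey(fetch, item_urls, clicks=3, dwell_seconds=10,
                   sleep=None, rng=None):
    """Visit the home page, then repeatedly click a random item and dwell."""
    rng = rng or random.Random()
    sleep = sleep or (lambda s: None)  # swap in time.sleep / grinder.sleep for real runs
    visited = []
    fetch("/")                         # user is on the home page
    for _ in range(clicks):
        item = rng.choice(item_urls)   # randomly clicks on a particular item
        fetch(item)
        visited.append(item)
        sleep(dwell_seconds)           # views the item for ~10 seconds
        fetch("/")                     # goes back to the home page
    return visited
```

Seeding the random generator (random.Random(0)) makes a run reproducible while you debug the script.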
I searched Google and Stack Overflow for my problem but couldn't find a good solution. Below is the description.
Our Java web application displays search results from our local database together with external web service API calls, so the search logic has to combine these results and display them on the results page. The problem is that the external API calls return results more slowly than our local DB calls. Performance is crucial for our search results, and the results should be live, i.e. we should not cache or persist the external results in our local DB. Right now we spawn two threads, one for the DB call and another for the external API, and combine the results for display on the screen. But this kills the performance of our application, particularly when we call more than one external API.
Is there any architectural solution for this problem?
Any help would be greatly appreciated.
Thanks.
You cannot display data before you have it.
1) You can display your local data and, as the other data arrives, add it via AJAX.
2) If there are repeated queries, you could cache external answers for a short time (displaying them with a warning that they are old and will be replaced by a fresh answer) and push the new answer as soon as it arrives.
With at least (1), the system will be responsive; with (2), a usable answer can be available immediately, even if it is not current.
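To make (1) concrete on the server side, one common shape is to give every source a fixed time budget and respond with whatever arrived in time, letting slow external results stream in later via AJAX. A minimal sketch (the names and the thread-pool choice are assumptions, not your actual code):

```python
import concurrent.futures as cf

def gather_results(sources, timeout):
    """Run all source callables in parallel; return whatever finishes in time.

    `sources` maps a name to a zero-argument callable (local DB query,
    external API call, ...). Sources that miss the deadline are simply
    absent from the result; the page can fetch them later via AJAX.
    """
    pool = cf.ThreadPoolExecutor(max_workers=len(sources))
    futures = {pool.submit(fn): name for name, fn in sources.items()}
    results = {}
    try:
        for future in cf.as_completed(futures, timeout=timeout):
            results[futures[future]] = future.result()
    except cf.TimeoutError:
        pass  # deadline hit: slow sources are dropped from this response
    pool.shutdown(wait=False)  # don't block the response on stragglers
    return results
```

The key point is that the fast local data is never held hostage by the slowest external API.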
By the way, if the external source takes long to answer, are you sure its answer is not stale (e.g. if it gathers some data and waits for the rest, then what it has gathered so far can go stale)? So maybe (and maybe not) short-term persisting is not as bad as you think.