Solve: JitBit Easy Macro Creator - Playback faster than recorded keyboard/mouse?
Answer» I have been using the JitBit Easy Macro Recorder http://www.jitbit.com/macro-recorder/ for about 6 years now on slower (Pentium 4 -> Core 2 Duo) computers to create and run quick automated keyboard/mouse routines. But I thought there would be a real-time timer keeping it all in sync no matter the speed of the computer. Many years ago, before real-time timers, the faster the computer, the faster the program executed; for example, games where the Pac-Man ghosts would immediately catch Pac-Man the moment you started an 8088-era 8-bit DOS game on a 486DX 33MHz.

That wasn't quite how it worked. The original IBM PC did have a rather accurate Real Time Clock; its obvious flaw was that it got reset, but the timer itself was still accurate and didn't differ between systems. It is actually an issue with how the games were programmed. Programmers usually only tested on one machine, so if the game ran too fast, they just stuck in a loop that executed an arbitrary number of times until the speed was reasonable. Of course, on later machines those loops were often inconsequential, and the entire thing ran far faster than desired. Even then the programmers could have capped the game at a maximum of, say, 30 fps, but instead they designed it to perform, say, 50,000 empty loops each game "tick". This made the game playable on the machines they tested it on, but newer machines executed all instructions faster, and the 50,000 empty loops that caused a 50ms "delay" got shorter and shorter until, within maybe two CPU generations, the delay was essentially gone.

However, turning to this program's issues: the way a recorder of this sort works is typically to record actions and then play them back. That much is obvious, but the playback depends on the Win32 API function SendInput. SendInput takes a group of events, and those events include information such as the specific time of the event. HOWEVER, one caveat of the API is that, while many "recorder" programs seem to think so, SendInput does not "delay" any input. Every single message is posted to the active window or wherever needed (faked mouse movements, keyboard information, etc.); the "time" information is sent as part of the message, but very few applications really care about it. To add to that, a lot of recorder programs write a direct offset into it (for example, their first input might have 0 in the time field, the next 500, etc.), but the time field is actually meant to hold the complete time of the event. Depending on the program, playback might be faster because the application is using the older "idle loop" method and was really only tested on one or a few machines on which it happens to work properly.

Thanks BC for the info. I remember adding loops in BASIC many years ago to slow down some features of games, where you wanted to see the transition of movement rather than a flash across the screen. For example, the first game I ever made, in 1987, called Lunar Escape, showed an "A" as a ship landing on a moon made up of a bunch of "*" in a circle formation; as simple as ASCII can be, before I learned how to play with sprites. That was around when I took BASIC in high school in 1990, tapping the teacher for everything they could share with me on BASIC until I had exhausted everything they could teach me, and moving on from there to my own research to learn more.
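To make the SendInput caveat above concrete, here is a minimal C++ sketch of paced playback (untested, and the RecordedEvent structure with its delayMs field is purely hypothetical for illustration, not JitBit's actual format). The point is that the caller has to do the waiting itself, because SendInput injects everything immediately:

Code: [Select]
// Pacing must come from the caller: SendInput() injects events
// right away, and the INPUT "time" field is a timestamp, not a delay.
#include <windows.h>
#include <vector>

struct RecordedEvent {   // hypothetical storage format for illustration
    DWORD delayMs;       // gap measured at record time
    INPUT input;         // the Win32 event to re-inject
};

void Playback(const std::vector<RecordedEvent>& events)
{
    for (RecordedEvent ev : events) {            // copy: SendInput wants a non-const pointer
        Sleep(ev.delayMs);                       // reproduce the recorded gap
        SendInput(1, &ev.input, sizeof(INPUT));  // returns immediately
    }
}

A player that skips the Sleep, or writes offsets into the time field expecting the system to honour them, plays the whole macro back as fast as the machine can post messages, which matches the behaviour described above.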
Digging online I found some interesting info regarding timers and multi-core CPUs, such as:

Quote
On Windows machines you can use QueryPerformanceCounter() and QueryPerformanceFrequency() to get accurate timing. Note that on dual-core AMDs you will also need to use SetThreadAffinityMask() or risk getting some strange results.
from: http://cboard.cprogramming.com/cplusplus-programming/128902-keeping-program-running-constant-speed.html

My CPU is an AMD Athlon II X4 620 2.6GHz, so the note that "on dual-core AMDs you will also need to use SetThreadAffinityMask() or risk getting some strange results" was rather interesting, as all the other systems I have used the software on in the past were Intel CPUs or single-core AMDs like the Athlon XP 2800+ and Sempron 145.

As for real-time timers, it was my understanding that starting in the mid-to-late 1990s, programmers of time-sensitive programs like games placed what were almost like checkpoints in the code, tied to the system time in seconds, to hold the program to what the eye perceives as a constant rate, using dynamic CPU-cycle-wasting loops or dynamic sleep timers adjustable to millisecond precision on the fly, so that the same game runs at the same speed whether the C++ program is run on a Pentium II 450MHz or a Core i7. (Though eventually, even with a dynamic delay, I could see reaching a limit to how far you can keep slowing a program down on ever-faster hardware, especially as the OS moves forward to support new hardware and you eventually hit incompatibility for lack of supporting DLLs, etc.) I suppose no program was ever intended to last forever... LOL. (Maybe "real-time timer" is the incorrect term for programs that run at the same rate across different hardware...)

I am going to contact JitBit and see what they suggest to keep it running at the same rate as recorded, if XP compatibility mode doesn't cure it. Until the rate of execution can stay constant and not run faster than originally recorded, I do have an older 2.4GHz Intel Core 2 Duo that it runs fine on and can use that.

Quote from: DaveLembke on November 27, 2012, 03:36:36 AM
Thanks BC for the info.

Yes, that is what I mean. And, to be honest, that is the wrong way to do it, and it was the wrong way to do it then as well; a proper clock-based delay should be implemented instead. For example, take the loop-based version first. This is probably not runnable as-is, since I don't 100% remember everything about the old BASIC; I'm not sure the older BASICs supported SCREEN 13 (320x200 with 256 colours) either.

Code: [Select]
50 CURRX% = 0
75 CURRY% = 120
80 SCREEN 13
100 REM --START OF GAME LOOP--
150 CLS
200 CIRCLE (CURRX%, CURRY%), 80
300 CURRX% = (CURRX% + 1) MOD 320
400 REM Fake "timer": burn a fixed number of iterations to slow down
500 I = 0
600 IF I = 1000 THEN GOTO 100
700 I = I + 1
800 GOTO 600

This (probably doesn't work, but let's assume it does for the sake of argument) would move a circle from the left side of the screen to the right side, with the actual rate being timing-dependent. This is, basically, wrong. You won't see anything at all on anything faster than an 8088 with that delay, so you would either need some sort of freaky database to find out how fast each machine is, calculate it using some weird algorithm, or just give up, let your programs become unrunnable, and open the market for products like MoSlo.
Alternatively, you could use TIMER:

Code: [Select]
50 CURRX% = 0
75 CURRY% = 120
80 SCREEN 13
100 REM --START OF GAME LOOP--
150 CLS
200 CIRCLE (CURRX%, CURRY%), 80
300 CURRX% = (CURRX% + 1) MOD 320
400 TIMESTART = TIMER
500 IF TIMER - TIMESTART > (1 / 20) THEN GOTO 100
600 GOTO 500

Again, untested, so I don't know for sure that it works, but it basically just uses the TIMER function, which reads the computer's built-in RTC. Maybe 10 percent of the games written back then actually used this approach; most of them, sadly, used the "just have an idle loop" method. Others took the idle-loop method and added weird algorithms to calibrate the delay to something that would work on the machine; those often crash with Divide Overflow errors on faster machines, since they usually time how long a certain number of iterations takes and use that as a divisor, and on a fast machine the measured difference can actually be zero.

Quote
My CPU is an AMD Athlon II X4 620 2.6GHz, so the note that "on dual-core AMDs you will also need to use SetThreadAffinityMask() or risk getting some strange results" was rather interesting.

The problem is the result of some power-management feature of the Athlon. I'm not 100% sure, and this is mostly a guess (educated or not, take your pick), but the usual procedure is to call QueryPerformanceFrequency and then QueryPerformanceCounter. However, it's possible the thread is running on a core that is "speedstepped" down (or whatever the AMD tech is called); this results in the reported frequency being lower, since that core is clocked down. Meanwhile, the program might be moved to the other core after a context switch, so QueryPerformanceCounter() runs on that one. That gives odd results, because you are effectively comparing the timing and clock-frequency values of two cores that aren't necessarily running at the same clock speed. I don't think this is a problem for later AMD CPUs or for Intel CPUs.

Quote
(Though eventually, even with a dynamic delay, I could see reaching a limit to how far you can keep slowing a program down on ever-faster hardware, especially as the OS moves forward to support new hardware and you eventually hit incompatibility for lack of supporting DLLs, etc.) I suppose no program was ever intended to last forever... LOL.

BSD still contains large portions of the original UNIX codebase, and it still works. I know you mean this in jest, but the "it works now" attitude is actually the reason for a lot of problems in the software industry. For example, the original decision to use only two digits for the year was made when the year 2000 was the distant future.

Quote
(Maybe "real-time timer" is the incorrect term for programs that run at the same rate across different hardware...)

They run at the same rate because they use the system's RTC. Rather than waiting, say, 1000 iterations, they actually wait 1/20th of a second (or whatever).
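For what it's worth, here is a minimal C++ sketch of the QueryPerformanceCounter approach with the SetThreadAffinityMask workaround from the quote above (untested; pinning to the first core is an arbitrary choice for illustration, and Sleep stands in for whatever work is being timed):

Code: [Select]
// Pin the thread to one core so the frequency and both counter reads
// come from the same core, per the quoted advice for dual-core AMDs.
#include <windows.h>
#include <stdio.h>

int main()
{
    SetThreadAffinityMask(GetCurrentThread(), 1); // run on core 0 only

    LARGE_INTEGER freq, start, stop;
    QueryPerformanceFrequency(&freq);   // counter ticks per second
    QueryPerformanceCounter(&start);

    Sleep(100);                         // stand-in for the timed work

    QueryPerformanceCounter(&stop);
    double seconds = (double)(stop.QuadPart - start.QuadPart)
                   / (double)freq.QuadPart;
    printf("Elapsed: %.6f seconds\n", seconds);
    return 0;
}

On the affected CPUs, the symptom without the affinity mask would be elapsed times that jump around (or even come out negative) when the thread migrates between cores whose counters disagree; pinning sidesteps that.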
As a real-world example, you can run the DOS version of DOOM on a modern PC (booting into DOS mode), and it will not run faster than its cap of 35fps. And if it runs slower than the cap, the timing still works; at 15fps, gameplay doesn't slow down proportionally, you simply see fewer frames per second. Much like games today (well, most games today).

Quote
I am going to contact JitBit and see what they suggest to keep it running at the same rate as recorded, if XP compatibility mode doesn't cure it. Until the rate of execution can stay constant and not run faster than originally recorded, I do have an older 2.4GHz Intel Core 2 Duo that it runs fine on and can use that.

If you are using a multi-core machine, you might try disabling any power-saving CPU features. That is what causes the issue (I think) with the AMD CPUs mentioned, so it might still be a problem on modern machines.

Thanks BC for getting down to the specifics and taking the time to code up old BASIC for example purposes. I very rarely work with old BASIC anymore; C++ for stand-alone programs and Perl for quick scripts have been my current choices for programming and scripting. Going to try disabling the power-saving CPU features and see what happens. I know it is enabled because I have a desktop gadget that shows the status of the cores, memory use, and clock rate, and it doesn't run at 2600MHz all the time; it speed-steps down to around 800MHz when idle under Windows 7, and the stepping changes on the fly between roughly 800MHz, 1200MHz, 1600MHz, and 2600MHz (or some similar stepped rates). I never even thought about the speed-stepping as part of the equation that could influence the timing of the macro's execution. I was thinking that if it calls for more processing power, it gets what it wants without starving for CPU cycles or being fed too many, just like the other processes that get what they need on the fly, crunch the job quickly, and then go back to a crawl until summoned again.

In the meantime I created a quick macro for that game on my netbook (Atom D510 dual-core 1.66GHz, 1GB RAM, Windows XP Home SP3), and everything runs at the correct speed relative to the rate at which it was recorded. The only thing that was almost a problem was reaching everything within the small 10.1" display without adding scroll up/down to the routine. I found a position that worked and marked an index card against the position of Firefox's scroll-bar slider, so that the next time the macro is run, the point-and-click routines line up with the objects on the screen that it has to select/target in the game.

While some people would probably just move to another computer that behaves, such as the netbook, I am still going to see if shutting off the CPU power-saving features fixes it, in case a future application needs the more powerful CPU; I would then have a setup on which it behaves. And when the project is done, I will re-enable the power-saving CPU features so it isn't idling at 2.6GHz and creating unnecessary heat and power consumption.
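To tie the thread together, here is a small C++ sketch of the frame-capped, clock-driven loop BC describes above for DOOM (a generic illustration of the technique, not DOOM's actual code): movement advances by elapsed wall-clock time, so a slow machine just drops frames while a fast one waits, and gameplay speed stays the same either way.

Code: [Select]
// Clock-driven game loop: the cap limits how often we draw, while
// movement scales with real elapsed time instead of iteration count.
#include <chrono>
#include <thread>
#include <cstdio>

int main()
{
    using clock = std::chrono::steady_clock;
    const auto frame = std::chrono::milliseconds(1000 / 35); // ~35fps cap
    const double speed = 60.0;   // units per second, on any hardware
    double x = 0.0;

    auto prev = clock::now();
    while (x < 320.0) {
        auto now = clock::now();
        std::chrono::duration<double> dt = now - prev; // seconds elapsed
        prev = now;

        x += speed * dt.count();        // time-based, not loop-based, movement
        std::printf("x = %.1f\n", x);   // stand-in for rendering

        auto spent = clock::now() - now;
        if (spent < frame)
            std::this_thread::sleep_for(frame - spent); // enforce the cap
    }
    return 0;
}

On a machine too slow to hold the cap, dt simply grows and each frame moves the object further, which is exactly the "fewer frames, same speed" behaviour described above.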