I really searched a lot and read quite a few posts but didn't find anything similar, so I "dare" to ask:
Since KM is able to OCR-scan the screen: wouldn't it be possible to have something similar to the "Found Image" trigger that uses OCR, working as a "Found Text" trigger? Maybe a combination of both would be great and useful.
Of course the OCR result depends on the quality of the text (size, the colors used, contrast), but that's the same as with images. If we had this option beside "Found Image…", IMHO it would greatly enhance the options for locating an area to work with mouse actions.
If I missed something I always appreciate some hints. Thanks!
Thanks a lot for your hint, and so sorry: I completely mixed up the terms. I do indeed use the "Found Image" trigger in a specific scenario where it works as it should, and I don't notice any high processor load.
But what I wanted to ask is what I have changed the topic's title to: I would love it if we could place the cursor relative to found text on the screen, just like we can with images. It might work better in situations where "Found Image" fails (in my case often for reasons I can't reproduce), or simply provide additional options.
You're probably right. Just a moment ago I found this post…
… where Sleepy seems to have achieved something similar to what I am looking for. I'll keep an eye on this, and when my Monterey-equipped machine arrives in the coming days, I can maybe investigate it a bit more.
When I play games in Apple Arcade, I turn on a macro that runs in an infinite loop, using the Monterey OCR to read the screen continually, and it does not impact game performance. The purpose of my macro is to automatically click on parts of the screen when certain words appear. For example, whenever the words "click to continue" appear on the screen, my macro clicks on the screen. My macro has a user interface which lets me add new phrases and new click locations. I haven't released this macro to the public yet, but I probably should.
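For what it's worth, the matching step such a loop needs (scan the latest OCR result for known phrases and look up a click target) can be sketched roughly like this in Python; the `ocr_screen` stub and the phrase table are placeholders for KM's actual OCR action and your own list, not the real macro:

```python
# Placeholder for the real OCR step; in the actual macro this would be
# KM's Monterey OCR action writing its result into a variable.
def ocr_screen():
    return "Level cleared. Click to continue!"

# User-maintained table: phrase -> screen coordinates to click.
# Phrases and coordinates here are made up for illustration.
PHRASE_ACTIONS = {
    "click to continue": (640, 400),
    "play again": (640, 500),
}

def find_action(screen_text, actions=PHRASE_ACTIONS):
    """Return the click target for the first known phrase found, else None."""
    text = screen_text.lower()
    for phrase, target in actions.items():
        if phrase in text:
            return target
    return None

# The macro's main loop would then be roughly:
#   while True:
#       target = find_action(ocr_screen())
#       if target:
#           click(*target)   # e.g. a KM "Move and Click" action
#       pause briefly so the loop stays cheap
```

The case-insensitive substring match keeps the per-iteration cost trivial, which would explain why the loop doesn't affect game performance.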
Also, I have another macro which does the impossible and locates words on the screen using a binary search algorithm combined with OCR. It works fine, but even on an M1 Mac it can take a minimum of 5 seconds, and sometimes as long as 30 seconds, to find the location of the words. (This is usually okay, since finding a location usually has to be done only once; it then stores the location of the words in a KM Dictionary, so the next time you look it takes no more than a second to confirm the location. Truly ingenious.) I haven't released this macro to the public yet because I don't think people on this site really want this. But since you are asking, I will see if the macro is fit for release.
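As I understand the description, the binary search works by repeatedly OCR-ing halves of the current region and recursing into the half that still contains the word, with a dictionary caching results between runs. A rough Python sketch under that assumption; the `ocr_region(region)` callback stands in for the real OCR call, and a plain dict stands in for the KM Dictionary:

```python
# Cache of previously found locations, standing in for the KM Dictionary.
LOCATION_CACHE = {}

def locate_word(word, region, ocr_region, min_size=50):
    """Narrow the region (x, y, w, h) containing `word` by repeatedly
    OCR-ing halves and recursing into the half that still matches."""
    x, y, w, h = region
    if w <= min_size and h <= min_size:
        return region
    # Split the longer axis in two.
    if w >= h:
        halves = [(x, y, w // 2, h), (x + w // 2, y, w - w // 2, h)]
    else:
        halves = [(x, y, w, h // 2), (x, y + h // 2, w, h - h // 2)]
    for half in halves:
        if word in ocr_region(half):
            return locate_word(word, half, ocr_region, min_size)
    return region  # word straddles the split; stop narrowing here

def locate_cached(word, screen, ocr_region):
    """Check the cache first, so repeat lookups need only one confirming OCR."""
    if word in LOCATION_CACHE and word in ocr_region(LOCATION_CACHE[word]):
        return LOCATION_CACHE[word]
    LOCATION_CACHE[word] = locate_word(word, screen, ocr_region)
    return LOCATION_CACHE[word]
```

Each narrowing step costs one or two full OCR passes over a shrinking region, which fits the reported multi-second first lookup; the cached path needs only a single confirming OCR, which fits the sub-second repeat lookups.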
I got all this working during the Monterey beta. It works so well that it opens a new era for me in KM macro programming.
Thanks a lot for your explanation. It really sounds very interesting, but also pretty complex to me. I am by no means an experienced scripting guy. Even though my KM library is pretty huge, it consists mostly of simple things: mouse actions, shortcuts for sequenced tasks, palettes, etc.
And my goal is to keep it at that level, so that I can understand what is going on and maintain things if necessary. For example, I don't want to rely on the help of others when a script or some complicated add-on stops working and I can't fix it on my own.
So please don't feel obliged to share your macro just because of my interest.
I hope you don't get me wrong, I am not a native speaker/writer.