NATO Phonetic Converter Macro (ChatGPT-made AppleScript) (v11.0.3)

I see @tiffle already covered this topic 4.5 years ago using JavaScript: Replace Clipboard with its NATO Phonetic Representation Macro (v9.0.5)

This version is in AppleScript, and input and output are handled differently: a dialog prompts for the words to convert, and the result is displayed on screen. Like @tiffle's version, it also copies the result to the clipboard. Not world-shattering differences. (A rough sketch of this input/output pattern appears below.)

The reason I'm posting it here is to say that this macro is nothing more than a single AppleScript AND that ChatGPT wrote it from beginning to end.

What I have found in the few AppleScripts I've had ChatGPT write is that it usually takes a minimum of six iterations to get it right. It's proving a useful tool for me to get some things done that would otherwise take a lot more time and effort.

If you use this macro, please note all the usual cautions. It has not been tested much at all, so I cannot vouch for its reliability.
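
Here is that rough sketch of the input/output pattern -- not the attached macro's actual code, and natoConvert is just a placeholder name for whatever conversion routine you plug in:

-- Prompt for the words, convert them, copy the result to the clipboard,
-- and display it on screen.
set theInput to text returned of (display dialog "Words to convert:" default answer "")
set theResult to natoConvert(theInput)
set the clipboard to theResult
display dialog theResult buttons {"OK"} default button "OK"

-- Placeholder converter so the sketch runs on its own; swap in a real one,
-- such as the natoConvert handler posted later in this thread.
on natoConvert(theText)
	return theText
end natoConvert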

21)NATO Phonetic Converter.kmmacros (4.3 KB)

1 Like

Send ChatGPT back to school!

Spawning a new shell instance to re-case each and every character individually is horribly inefficient -- and, since AppleScript's text matching is case-insensitive by default, totally unnecessary.
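
You can see that for yourself in Script Editor -- none of this is from the posted macro, just the default comparison behaviour:

-- AppleScript string comparisons ignore case unless you say otherwise,
-- so there is no need to re-case anything before matching.
"A" is equal to "a" --> true
offset of "K" in "abcdefghijklm" --> 11
considering case
	"A" is equal to "a" --> false
end considering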

1 Like

I don't know much about JSON, but I learned some while writing my rocket launch tracking macro. And I thought this might be a good chance to use it.

This macro takes any input A-Z or a-z, uppercases it, then looks it up in a JSON variable that holds the phonetic alphabet. If nothing else, it's a pretty easy-to-follow example of how to use JSON to store and retrieve values.

(Being a rookie JSON user, I imagine there are better ways to structure what I created…but it works :).)
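
For anyone wondering what that kind of lookup can look like in a plain script, here is a minimal AppleScriptObjC sketch -- not the macro's actual implementation, and the three-letter map and the natoLookup name are just for illustration:

use framework "Foundation"
use scripting additions

-- Keep the alphabet as JSON text, parse it with NSJSONSerialization,
-- and look up an uppercased key.
on natoLookup(theChar)
	set jsonText to "{\"A\":\"Alfa\",\"B\":\"Bravo\",\"C\":\"Charlie\"}"
	set jsonString to current application's NSString's stringWithString:jsonText
	set jsonData to jsonString's dataUsingEncoding:(current application's NSUTF8StringEncoding)
	set theDict to current application's NSJSONSerialization's JSONObjectWithData:jsonData options:0 |error|:(missing value)
	set theKey to (current application's NSString's stringWithString:theChar)'s uppercaseString()
	set theWord to theDict's objectForKey:theKey
	if theWord is missing value then return "[unknown]"
	return theWord as text
end natoLookup

natoLookup("b") --> "Bravo"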

21)NATO Phonetic Converter - JSON.kmmacros (6.9 KB)

A screenshot hides within

-rob.

1 Like

Not having the competence to code this myself, or to evaluate what got coded, I can't address the poor coding job ChatGPT did. For my purposes, it is good enough.

In posting this I’m not making claims for ChatGPT’s fitness as a coder. I trust what you said about the poor coding job ChatGPT did. As far as I understand, there is zero intelligence in the misnomer “AI.” I remain impressed that its pattern matching is good enough to let me get functional code out of it.

It's a start, and it's questionable whether it's the best use of our resources right now. A few billion sent to help LA right now might make more sense.

But you should always cast a critical eye over an LLM's answer -- just as you should with any of mine!

In this case, why is it changing case one character at a time? Even if it were necessary to properly match against natoList, wouldn't it be better to re-case the whole string in one go?
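
For what it's worth, if re-casing really were needed, a single call over the whole string would do it -- a quick sketch, not a line from the generated script:

-- Re-case the entire string in one shell call instead of one call per character.
set theText to "Keyboard Maestro"
set upperText to do shell script "echo " & quoted form of theText & " | tr '[:lower:]' '[:upper:]'"
--> "KEYBOARD MAESTRO"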

I know "AI"-generated code is the hotness right now, but don't forget to try a simple web search first! Putting applescript text to nato phonetic into your favourite search engine will give you plenty to go on, including some routines based on (very fast) string offsets. This function, for example:

on natoConvert(theText)
	-- Position N in theChars corresponds to item N in natoList.
	set theChars to "abcdefghijklmnopqrstuvwxyz0123456789"
	set natoList to {"alpha", "bravo", "charlie", "delta", "echo", "foxtrot", "golf", "hotel", "india", "juliet", "kilo", "lima", "mike", "november", "oscar", "papa", "quebec", "romeo", "sierra", "tango", "uniform", "victor", "whiskey", "x-ray", "yankee", "zulu", "zero", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine"}
	
	set outList to {}
	repeat with eachChar in (characters of theText)
		-- offset of is case-insensitive by default, so "A" and "a" both match.
		set thePosition to offset of eachChar in theChars
		if thePosition > 0 then
			copy item thePosition of natoList to end of outList
		else
			copy "[unknown]" to end of outList
		end if
	end repeat
	-- Join the words with spaces, restoring the delimiters afterwards.
	set oldTIDs to AppleScript's text item delimiters
	set AppleScript's text item delimiters to " "
	set outText to outList as text
	set AppleScript's text item delimiters to oldTIDs
	return outText
end natoConvert

...is much faster than the function ChatGPT suggested.
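
And because offset of follows AppleScript's default case-insensitive matching, the handler above copes with mixed-case input without any re-casing at all -- for example:

natoConvert("Abc1") --> "alpha bravo charlie one"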

From my understanding, having a critical eye is a function of having certain distinctions. Regarding AppleScript, my distinction set is very basic. Trust me, if I have any distinctions or experience to bring, I’ll not hold back.

In this instance, I set the prompt and then just copied and pasted the error messages from the KM engine log back into the ChatGPT session without even looking at the code itself. After six rounds of that I had something that worked. As it is not a mission-critical task, is used infrequently, and I have a wicked-fast Mac, it is more than enough for me.

What was interesting and useful were the explanations of the errors and of the corrections that bracketed each round of debugging.

Until this forum becomes a “let’s all commit to teaching Bern how to code, be there for him immediately anytime he wants to do some coding, for as long as he wants, with endless patience for all his smart and dumb questions, which can wander all over God’s creation and beyond” kind of place, I’m taking what I can get.

No diminishment of what IS given here as it is the best there is online.

No. It didn't.

It restitched fragments of other people's work without understanding:

  • that work,
  • your request, or
  • the token stream that it emitted.

Not writing. Meat reconstitution.

Caveat emptor, and look out for copyright squalls ahead.

3 Likes

It also misspelled a few of the code words (a common error); the correct spellings, and a corrected list for the natoConvert handler above, follow:
A=Alfa
J=Juliett
X=Xray
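
Applied to natoConvert, that is a three-word change:

-- natoList with the corrected spellings (alfa, juliett, xray); everything else
-- in the handler stays the same.
set natoList to {"alfa", "bravo", "charlie", "delta", "echo", "foxtrot", "golf", "hotel", "india", "juliett", "kilo", "lima", "mike", "november", "oscar", "papa", "quebec", "romeo", "sierra", "tango", "uniform", "victor", "whiskey", "xray", "yankee", "zulu", "zero", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine"}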

1 Like

The Oxford Dictionary defines intelligence as "the ability to acquire and apply knowledge and skills", which the likes of ChatGPT do, at least in a broad sense of "acquire".

You might be thinking of the difference between weak AI and strong AI (the latter would be sentient AI), or perhaps that ChatGPT isn't... the sharpest tool (yet?).

To be fair, a lot of people are like that too. :laughing:

1 Like

It's true that what LLMs industrialise is bluffing and plagiarism,
but even bluffers and plagiarists understand a little of what they are doing,
and tend to add some (albeit modest) elements of added value.

Unlike LLMs, they have lived in the world, experimented a bit, and have developed some sense of how a stream of tokens (whether words or images) fits into the rest of human activity.

An LLM is just a clueless meat grinder – marketed by bluffers and plagiarists who want you to think that modelling syntax is the same as modelling understanding, and who hope, for the sake of their share price, that you will solemnly nod and agree that a tape-recording is just as useful and intelligent as the human it recorded.

2 Likes

I do listen to your point of view, even though I think it misses the point: whether or not you call AI "intelligent," it is extremely useful. It helps me every hour of every day. I can't exactly hire the original people whose thought processes enabled AI to do its work for me, but I can hire (or use for free) those LLMs on the Internet. What makes it a great invention is that it copies the thought processes behind human-written text and then acts like the human. The original human can even be dead, and an LLM can still copy that human's thought processes, giving me the assistance I need.

I'm willing to concede that AI isn't able to "think," but are you actually saying AI is "functionally useless"?

I may be more on your side than you think. I have argued for ten years that self-driving cars are likely a serious danger to our lives because they don't really understand what they are seeing around them. When I drive I can tell if a pedestrian is in distress, or if another driver is waving to let me in front of him, but self-driving cars cannot do that at all because they don't understand the human condition. At least not yet.

People can at least be held accountable for their plagiarism. Corporations have been working (and spending) tirelessly to remain exempt from such accountability and continue their mass theft of human labour.

Except Apple. Apple says it has paid for all the text that its AI has learned from.

I wonder if they will continue to pay royalties to those same people after they start charging a subscription fee for their AI service. :confused: I agree that Apple is marginally better than many other companies in this regard (and others), but it's difficult to believe that a trillion-dollar company is sincerely interested in ethics (or the environment or sustainability) over profits.

AlphaFold and AlphaZero model meaning within restricted domains, and furnish a capacity to solve problems in new ways. It makes some sense to call them "AI".

LLMs are not AI – they model syntax (the internal statistics of token streams) and, like a bluffer, can output syntactically plausible streams, without having any model of meaning, and without any capacity to solve problems. (Initial impressions to the contrary were deflated when the answers to the exams they aced turned out to be in their linguistic training data.)

LLMs are interesting, and where what you need is plausible syntax, they can often provide. In the case of the languages I have tested them with (XQuery, XSLT, Haskell) the code they generate "looks plausible" but seldom even compiles. (Training samples probably too small, in those cases, but the Wolfram Benchmarking Project still shows little better than a coin toss on correct functionality, even for the latest LLMs).

On forums like this my main concern is enshittification – the danger that requests for help with problems will gradually be swamped by a slightly malodorous type of schnorring in the vein of:

"I pulled this code from a trash can – it smells a bit, and doesn't work, so could you just improve it ?"

Neither of us has seen the contract that Apple made with the New York Times for training Apple's LLM. Perhaps it includes a share of the future fees that Apple charges users. Why would Apple not pay them, say, 10% of their future AI profits?

You can agree with that phrase, but what I'm claiming is that Apple is heaps better than many other companies. Can you name any other company that has paid for all its training data?

Worth looking at the history of this:

Policy: Generative AI (e.g., ChatGPT) is banned - Meta Stack Overflow.

No -- but Apple hasn't either.

From https://arxiv.org/pdf/2407.21075, which is linked to from Apple's own https://machinelearning.apple.com/research/apple-intelligence-foundation-language-models:

This includes data we have licensed from publishers, curated publicly-available or open-sourced datasets, and publicly available information crawled by our web-crawler, Applebot

Where the hope of LLM suppliers is that "publicly available" will be mistaken for "public domain" or "copyright free".

1 Like