Birdfeeding
Jun. 6th, 2025 03:31 pm
I fed the birds. I've seen a mixed flock of sparrows and house finches.
I put out water for the birds.
EDIT 6/6/25 -- I did a bit of work around the patio.
EDIT 6/6/25 -- It rained off and on today.
A letter to M. But I am being sleazy and posting it here instead of writing a fresh post.
I suppose that I am misusing your writings. But I suppose that is an occupational hazard for any writer, so there is no need for anything other than a pro forma apology.
Mostly I have been pondering your thoughts on consciousness vis-à-vis the current foofaraw around AGI. Not that I think that artificial intelligence is anything but an oxymoron. But maybe that is just me being a cynical old man; I also think that, even among those of us laughingly referred to as Homo sapiens (it is a bit of a stretch to use the Latin word sapiens, "wise," in the official descriptor), the claim to being intelligent the bulk of the time is of questionable provenance.
The advertising term "AI" is here to stay and isn't going away anytime soon. I suppose that I am trying to get my head around just what it can do and what those abilities mean for the already fucked-up society that we inhabit.
Mostly, I think that it is going to be an anime-style search engine that will go through the low-level customer-service industry like shit through a goose. But those positions have never had an intelligence requirement attached to them. They are there to make the customer happy by following rules, at the lowest possible level of cost and compliance.
I think that AI will allow the corporations and their willing minions in the professional/managerial class to further marginalize the lower tiers of the economic scale. But this will only be the first step. The greedy geckos that populate the professional/managerial class will work exhaustively to catalog every possible way to use this fixed-cost replacement for anything remotely within its capabilities (almost any entry-level position, IMHO). I see this lasting about 20-25 years as the technology improves, at which time it will begin eating into the PMC itself.
So, I got an interesting response from a reader concerning my recent rant on AI and robots and old science fiction. The part that raised some questions was:
Isaac Asimov's Three Laws of Robotics are a set of guidelines for the behavior of robots, designed to ensure their interaction with humans is safe and ethical. They are:
1) A robot may not harm a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
The response from this reader was:
Law 1 is hugely problematic. Just think of all the 'hate laws' being pushed at the moment. What is 'harm'? And what if stopping a human being coming to harm requires harming them?
Yep. He has got it right. But then again, you have to think a little past that: about those laws, what they are trying to do, who is writing them, and the culture that promulgated them.
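To make that concrete, here is a minimal sketch of the Three Laws as a strict priority filter, written in Python. To be clear: the action format, the function names, and especially the harm() stub are my own inventions for illustration, not Asimov's text and not anything any lab actually ships. The machinery is trivial; all the real content hides inside harm(), and notice how casually this particular coder decided what that word means.

```python
# A toy priority filter for the Three Laws. Everything here is my own
# illustrative invention; the interesting part is how thin harm() is.

def harm(action, human):
    # The coder quietly decides that "harm" means "lethal" and nothing
    # else. Emotional, economic, or legal harm? Not in this sprint.
    # (The "through inaction" clause gets ignored too, as it usually is.)
    return action.get("lethal", False)

def choose_action(candidates, orders, humans):
    """Apply the Laws in order: First beats Second beats Third."""
    # First Law: discard any action that harms any human.
    safe = [a for a in candidates if not any(harm(a, h) for h in humans)]
    # Second Law: among safe actions, prefer the ones a human ordered.
    obedient = [a for a in safe if a["name"] in orders] or safe
    # Third Law: among those, prefer actions that keep the robot intact.
    surviving = [a for a in obedient if a.get("self_preserving", True)] or obedient
    return surviving[0] if surviving else None

actions = [
    {"name": "push the bystander", "lethal": True},
    {"name": "throw self onto the track", "lethal": False, "self_preserving": False},
    {"name": "do nothing", "lethal": False},
]
print(choose_action(actions, orders={"do nothing"}, humans=["operator"]))
```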
Consider for a moment the “trolley problem” lurking in that response. “Holy Kobayashi Maru, Batman!” This tired conundrum gets trotted out, and undergraduates preen and strut with their tired-ass rationales.
But I think that this kind of thing is exactly what worries my gentle reader who pointed out the dilemma. Our society really can’t stand the idea of “you’re damned if you do and you’re damned if you don’t”.
The simple and unsophisticated presentation of the trolley problem is one where the mental/physical states of the person operating the switch and the victims on the tracks are unknown. This is both simplistic and stupid.
Imagine your own petty bigotries and problematic actions (and please don’t think they aren’t there), and then imagine that you knew the identities and mental states of the “victims” on the track. Now you have a real problem, don’t you?
What if the “one” is your daughter? I would venture to guess that there would be five dead people at the end of the experiment. What if you knew that four of the five had terminal diseases and would die within a week? Would the change in death timing mean anything to you?
Let’s use an imaginary “Harry Potter” scenario, but with no “magic” to help you out. What if the “one” was sweet Hermione and the “five” were mean old Slytherins, and you were a Hufflepuff? Maybe a different answer depending on your house. I am certain that members of Gryffindor and Slytherin would not take much time to make their respective choices.
The robots and intelligences that we are trying to make will embody the same hodgepodge of conflicting goals, prejudices, compromises, and methodologies that makes up our laws. But in the end, the rules coded into them will be our rules, because we did the coding. The chance that they can come up with a solution that will make everyone happy is exactly zero.
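For illustration only, here is a toy sketch of what “we did the coding” looks like in practice; the scenario, the function names, and the valuations are mine, not anyone’s actual method. The decision procedure is three lines of arithmetic, and every ounce of the morality is smuggled in through whatever value_of() the coder happened to write.

```python
# A toy trolley-switch decider. The machinery is trivial; the "ethics"
# live entirely in the value_of() function some human had to supply.

def pull_lever(people_on_main, people_on_siding, value_of):
    """Divert the trolley only if the coder's valuation says the siding costs less."""
    cost_main = sum(value_of(p) for p in people_on_main)
    cost_siding = sum(value_of(p) for p in people_on_siding)
    return cost_siding < cost_main

def values_everyone_equally(person):
    return 1

def values_my_daughter_above_all(person):
    # The coder's own petty bigotries and attachments, hard-wired.
    return 10**6 if person == "my daughter" else 1

five = ["A", "B", "C", "D", "E"]
print(pull_lever(five, ["my daughter"], values_everyone_equally))       # True: pull it
print(pull_lever(five, ["my daughter"], values_my_daughter_above_all))  # False: five dead
```

Change one number in value_of() and the “right” answer flips, which is exactly the sense in which the rules will be our rules.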
My solution to the trolley problem is that I would walk away. If there is no way to win, don’t play. Maybe that is what we need to teach.
How do you dry your hair?
air dry: 45 (77.6%)
towel dry roughly: 21 (36.2%)
towel dry carefully / squeezingly: 18 (31.0%)
hair dryer or other device: 13 (22.4%)
other: 0 (0.0%)
not applicable: 1 (1.7%)
add styling stuff: 10 (17.2%)
add conditioning stuff: 10 (17.2%)
add anti-frizz stuff: 7 (12.1%)
other: 1 (1.7%)
ticky-box of other people are, generally speaking, quite mysterious: 21 (36.2%)
ticky-box full of poll votes: 17 (29.3%)
tickybox full of a yawning cat broadcasting calm and satisfaction into the world: 41 (70.7%)
ticky-box full of the tickly froth edge of a wave on pale sparkly sand, at dawn: 29 (50.0%)
ticky-box of rationing your exclamation marks: 14 (24.1%)
ticky-box full of hugs: 39 (67.2%)