<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Ai on Ryan P. Meyer</title><link>https://ryanpmeyer.eu/tags/ai/</link><description>Something my own.</description><generator>Hugo</generator><language>en</language><managingEditor>hello@ryanpmeyer.eu (Ryan P. Meyer)</managingEditor><webMaster>hello@ryanpmeyer.eu (Ryan P. Meyer)</webMaster><copyright>© 2026 Ryan P. Meyer</copyright><lastBuildDate>Fri, 10 Apr 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://ryanpmeyer.eu/tags/ai/index.xml" rel="self" type="application/rss+xml"/><item><title>AI Written Content</title><link>https://ryanpmeyer.eu/topics/ai-written-content/</link><pubDate>Fri, 10 Apr 2026 00:00:00 +0000</pubDate><atom:updated>2026-04-10T00:00:00Z</atom:updated><author>hello@ryanpmeyer.eu (Ryan P. Meyer)</author><guid isPermaLink="true">https://ryanpmeyer.eu/topics/ai-written-content/</guid><category>topics</category><category>ai</category><description>[AI] written content is a very interesting thing in terms of content to consume. I do think it is and can be helpful to use it to fine-tune your own writings, but it can be a slippery slope on how much is just generated and posted.&amp;#xA;For example, using AI to write you TPS Report, probably fine; no one reads those anyway. On the flip-side, using it to write personal or engaging blog posts? Probably can be a bit soulless if you aren’t also heavily controlling the output.&amp;#xA;I’ve often heard “If you can’t be bothered to write it, then I can’t be bothered to read it.”&amp;#xA;</description><content:encoded><![CDATA[<p>[AI] written content is a very interesting thing in terms of content to consume. I do think it is and can be helpful to use it to fine-tune your own writings, but it can be a slippery slope on how much is just generated and posted.</p>
<p>For example, using AI to write your TPS report? Probably fine; no one reads those anyway. On the flip side, using it to write personal or engaging blog posts? Those can come out a bit soulless if you aren&rsquo;t also heavily controlling the output.</p>
<p>I&rsquo;ve often heard &ldquo;If you can&rsquo;t be bothered to write it, then I can&rsquo;t be bothered to read it.&rdquo;</p>
<p>Or, another example: &ldquo;I used AI to turn these 5 bullet points into a five-page report, which the recipient then used AI to turn back into 5 bullet points.&rdquo;</p>
]]></content:encoded></item><item><title>AI</title><link>https://ryanpmeyer.eu/topics/ai/</link><pubDate>Fri, 10 Apr 2026 00:00:00 +0000</pubDate><atom:updated>2026-04-10T00:00:00Z</atom:updated><author>hello@ryanpmeyer.eu (Ryan P. Meyer)</author><guid isPermaLink="true">https://ryanpmeyer.eu/topics/ai/</guid><category>topics</category><category>ai</category><description>Artificial intelligence can broadly refer to many technologies, from large language models and generative tools to actual science fiction A.I. like Skynet or that one Will Smith movie.&amp;#xA;The current wave of AI (roughly 2022–present) is dominated by transformer-based large language models that can generate text, code, and images. The capabilities continue to grow at a rapid pace. So are the risks: hallucination, deepfake-enabled scams, labor displacement, and the erosion of the ability to distinguish human from machine.&amp;#xA;Some interesting cases with AI, to me, involve [[Security]], [[Privacy]], and skill augmentation. How these tools change the threat landscape (AI-powered phishing, voice cloning, automated vulnerability discovery) and how we can use AI to defend against AI. The reshaping of what “trust” means when you can no longer assume a message, image, or voice is from a real person.&amp;#xA;</description><content:encoded><![CDATA[<p>Artificial intelligence can broadly refer to many technologies, from large language models and generative tools to actual science fiction A.I. like Skynet or that one Will Smith movie.</p>
<p>The current wave of AI (roughly 2022–present) is dominated by transformer-based large language models that can generate text, code, and images. The capabilities continue to grow at a rapid pace. So do the risks: hallucination, deepfake-enabled scams, labor displacement, and the erosion of the ability to distinguish human from machine.</p>
<p>Some of the most interesting cases with AI, to me, involve [[Security]], [[Privacy]], and skill augmentation: how these tools change the threat landscape (AI-powered phishing, voice cloning, automated vulnerability discovery), how we can use AI to defend against AI, and how &ldquo;trust&rdquo; is reshaped when you can no longer assume a message, image, or voice is from a real person.</p>
]]></content:encoded></item><item><title>Generation AI</title><link>https://ryanpmeyer.eu/posts/generation-ai/</link><pubDate>Mon, 30 Jun 2025 00:00:00 +0000</pubDate><atom:updated>2025-06-30T00:00:00Z</atom:updated><author>hello@ryanpmeyer.eu (Ryan P. Meyer)</author><guid isPermaLink="true">https://ryanpmeyer.eu/posts/generation-ai/</guid><category>posts</category><category>thoughts</category><category>AI</category><description>Raising the next generation.</description><content:encoded><![CDATA[<p>I was recently talking with some friends and we came onto an interesting subject regarding the newest generation of children. We were commenting on the fact that we now know people that are solely using AI chatbots to help raise their kids. &lsquo;Oh my kid has a fever and isn&rsquo;t eating what do I do?&rsquo; &lsquo;When my kid is teething how do I stop them from biting things?&rsquo; etc.</p>
<p>As we all know, people like to categorize things, and this extends to grouping people based on when they were born. The obvious ones are &ldquo;Baby Boomers,&rdquo; named for the boom in babies during that period, and &ldquo;Millennials,&rdquo; who came of age at the turn of the millennium. Other groups are stuck with holdover names like &ldquo;Gen X,&rdquo; &ldquo;Gen Z,&rdquo; and &ldquo;Gen Alpha.&rdquo;</p>
<p>I&rsquo;d like to propose that the next generation being born and raised right now, which I think is &ldquo;Gen Beta,&rdquo; will be known as &ldquo;Generation AI&rdquo; (aka &ldquo;Gen AI&rdquo;) because of the proliferation of AI tools becoming the new search engine.</p>
<p>In the past, there were books and group meetings to help you raise kids. Then people started googling and moving online, but there were usually other people grouped together on those sites. Now, though, you can do all that with AI. Asking nuanced questions and getting answers is exactly what AI is strong at.</p>
<p>So, now we are going to start seeing kids raised by parents who get the vast majority of their support through AI. I am not going to make any sweeping claims about whether this will be good or bad, but I do think it is interesting. Beyond being &ldquo;raised by AI,&rdquo; this next generation will also have AI throughout their lives, similar to how &ldquo;Gen Alpha&rdquo; is a post-internet generation.</p>
<p>Anyway, I just wanted to make this post because I don&rsquo;t think I have seen anyone try to coin the term for the next generation, and I found it interesting.</p>
]]></content:encoded></item><item><title>Thoughts on Vibe Coding</title><link>https://ryanpmeyer.eu/posts/thoughts-on-vibe-coding/</link><pubDate>Wed, 07 May 2025 00:00:00 +0000</pubDate><atom:updated>2025-05-07T00:00:00Z</atom:updated><author>hello@ryanpmeyer.eu (Ryan P. Meyer)</author><guid isPermaLink="true">https://ryanpmeyer.eu/posts/thoughts-on-vibe-coding/</guid><category>posts</category><category>Thoughts</category><category>AI</category><category>Vibe</category><category>Coding</category><description>Some thoughts on the vibe coding concept.</description><content:encoded><![CDATA[<p>I&rsquo;ve been thinking of the concept (idea? methodology?) of vibe coding. It was only a recently coined term from February 2025, based on <a href="https://en.wikipedia.org/wiki/Vibe_coding">Wikipedia</a> .  Roughly, it is the use of AI tools to generate functional code without fully understanding or questioning it. You, as a &ldquo;Vibe Coder&rdquo;, would, in its simplest form, act as a Project Manager of sorts and speak to what you want. The tools would generate implementation guides, implementation plans and the various code iterations until you have a functional program. One of the key elements of this process, and it seems like the degree of this is up for debate, is that you do as minimal code understanding or reviewing as possible - basically if the code works it ships.</p>
<p>Now, as it is a new concept I do expect it to continue to evolve, and the processes around it to become a bit more concrete. But I have some thoughts on some aspects and the overall concept of how it might play out.</p>
<h2 id="vibe-coding-vs-googling">vibe coding vs googling</h2>
<p>Let&rsquo;s start with the obvious: what this is replacing. Before vibe coding started taking off, one of the most common ways to &ldquo;generate&rdquo; code was to use Google. You&rsquo;d often end up in places like StackOverflow where you might find a snippet close to what you were trying to do. You&rsquo;d then copy and paste this &ndash; and it doesn&rsquo;t work! Because you still haven&rsquo;t updated it to work with your actual code. You still have to do a bit of critical thinking about how the code actually works and how you need to update it to make it work for you.</p>
<p>In most cases, this isn&rsquo;t seen negatively. You simply need a function and get a boilerplate that is 90% of the way there. You update it. And it works.</p>
<p>Now sometimes copied code &ldquo;works&rdquo; out of the box and you can straight up paste it and run it without thinking. This is what I think early vibe coding was without the AI wrapper we are seeing now. And that is usually when people start running into trouble, because they didn&rsquo;t really analyze how the code works and down the line it could end up being the thorn in the lion&rsquo;s paw.</p>
<p>What the function is doing, whether it is inefficient, or whether it could even create vulnerabilities &ndash; this wasn&rsquo;t considered by the developer. But, hey, it works, so all is good! &ldquo;That&rsquo;s a problem for future me&rdquo; is what many would say to justify this.</p>
<h2 id="vibe-coding-vs-no-code-solutions">vibe coding vs no code solutions</h2>
<p>Where I can see a strong use case for vibe coding is as a no-code solution. There are already many tools out there that position themselves as low-code, visual-coding, or no-code - and now we can add vibe code to that mix. For all intents and purposes, you may not even need to see the code when you are &ldquo;truly&rdquo; vibe coding, so services like <a href="https://lovable.dev/">https://lovable.dev/</a> are already pushing for this &ldquo;no need to view the code&rdquo; mentality. Compare that to <a href="https://www.cursor.com">https://www.cursor.com</a>, which is more adjacent to helping you with the code, though you can still just have it make something and run with it.</p>
<p>My opinion here is that a blind-box solution can work, but only in highly scoped cases; the real winner will be the more open-box solutions. There will still be a need to understand some concepts of programming - but that need is getting smaller all the time.</p>
<h2 id="dabbling-in-vibe-coding-at-a-ctf">dabbling in vibe coding at a ctf</h2>
<p>I would be a bit remiss not to include how I&rsquo;ve dabbled in the idea of vibe coding.</p>
<p>There was a CTF I was at recently where, as with most hacking or red-teaming, anything goes. To that effect, I wanted to focus on what commercial AI models could do when it came to some of the challenges.</p>
<p>If you&rsquo;ve not participated in CTFs before (this was my first real-life event), they are usually a mixed game of knowledge challenges, riddles, and puzzles. Usually there are clues in the name or description of the challenge about what might be the way to complete it or where the flag might be. From there it is up to you to think about the approach and what exploits or vulnerabilities exist that could get you there.</p>
<p>Vibe coding these exercises worked extremely well, with one major caveat: you still need some baseline knowledge to guide the AI to the right place, or to discern when the AI might be going off track. Now, I&rsquo;ll admit that I am still a novice with CTFs, but by including AI in a way where I directed it to what I needed, then tested and reworked the code with it, I was able to successfully complete quite a few challenges.</p>
<p>At this point, I am fairly convinced that the tooling is at a point where it can heavily accelerate the skills of threat actors, to the degree that adoption is required for those on the defense. I like to think of this as part of the evolution from one threat actor with one machine, to many threat actors each with fleets of machines, to each threat actor now running a fleet of agents on a fleet of machines.</p>
<p>This has brought out a new layer of resources available to threat actors that need to be considered.</p>
<h2 id="dabbling-in-general">dabbling in general</h2>
<p>In addition to trying a bit of vibe coding at the CTF, I have also found AI helpful when I need a quick script made in a language I am comfortable with, like Python. I generally know what I need and how to phrase it, and I can review the code (yes, I am not the best vibe coder in that sense) if needed.</p>
<p>I have also managed to generate an, albeit janky, app built on JavaScript and HTML. It wasn&rsquo;t anything mind-blowing, but it really does show that the generalist models, like Gemini 2.5 Pro, can handle complex functions and files. However, I want to reiterate that it still required a lot of coaching and realigning to get the AI to stay on track.</p>
<p>Occasionally, once it got <a href="https://news.ycombinator.com/item?id=43688933">stuck on a loop of a problem it couldn&rsquo;t fix</a>, the best option was to have it generate a new handoff prompt and spin up a &ldquo;new&rdquo; chat and get that instance up to the same speed. I would bet that the dedicated tooling is much better, but I have not tested them just yet.</p>
<h2 id="vibe-coding-and-security">vibe coding and security</h2>
<p>One of the largest concerns about vibe coding is the security quality of the code. This is where I think a lot of the ethos misses the mark. Sure, you could just &ldquo;vibe out&rdquo; a cool new app, but if you aren&rsquo;t considering security through secure-by-design (which I guess would mean making sure the AI agent is prompted to consider it?), through code reviews (which goes against the core idea of vibe coding), or through any security testing &ndash; then this is just a disaster waiting to happen.</p>
<p>There are already <a href="https://nmn.gl/blog/vibe-coding-fantasy">instances</a> of this happening. So, yeah, this isn&rsquo;t ready for primetime in its current state. It reminds me of when someone spins up a VPS with LAMP and maybe WordPress, puts no security controls around access to the server, and within a few days it becomes compromised. It&rsquo;s not that it can&rsquo;t be a secure option - it&rsquo;s just that security wasn&rsquo;t considered properly.</p>
<h2 id="pre-requirements-to-be-a-good-vibe-coder">pre-requirements to be a good vibe coder</h2>
<p>So what makes someone a good vibe coder? Or better yet, what makes good vibe coding? I think there is still a heavy need to actually understand what you are making - even if it is just handed to you and &ldquo;works.&rdquo; Beyond security concerns, there is just the ability to know what the code is doing and how it could be improved and what limitations it might have at that moment.</p>
<p>Will we get to a point where we don&rsquo;t need to look at the code? Probably, or at least close to it. For now, though, we should recognize the limitations while understanding that this can help someone really accelerate their coding proficiency.</p>
]]></content:encoded></item><item><title>Thoughts on Deepseek's Impact</title><link>https://ryanpmeyer.eu/posts/thoughts-on-deepseeks-impact/</link><pubDate>Sat, 01 Feb 2025 00:00:00 +0000</pubDate><atom:updated>2025-02-01T00:00:00Z</atom:updated><author>hello@ryanpmeyer.eu (Ryan P. Meyer)</author><guid isPermaLink="true">https://ryanpmeyer.eu/posts/thoughts-on-deepseeks-impact/</guid><category>posts</category><category>AI</category><category>Thoughts</category><description>Exploring the Initial Impact of Deepseek AI</description><content:encoded><![CDATA[<p>It has been about a week since most people became aware of Deepseek AI. Although Deepseek AI&rsquo;s r1 model, initially released on the 20th, went largely unnoticed, it gained attention after impacting the stock market the following week. During that week a lot happened: from confusion to dismissals to speculations and concerns. I&rsquo;ve been asked by a decent number of people what I thought of this situation, which spurred me to write up my thoughts.</p>
<p>I must preface this by stating I&rsquo;m not an LLM expert; however, there is a lot of information to unpack. My aim is to highlight which aspects of Deepseek are crucial, which are worth considering now, and which remain unknown. I&rsquo;ll try to break this down into a few topics: how it affects the current AI incumbents, whether it is safe to use, what information we could accept or should be critical of, and how this can affect things in the future.</p>
<h1 id="deepseek-and-the-incumbents">Deepseek and the Incumbents</h1>
<p>This is most likely where most people initially heard of Deepseek: it is a new Chinese-built LLM that is comparable to the most powerful available models from places like OpenAI. Benchmarks are available to analyze how different models compare. However, many people simply want to know whether it&rsquo;s good, and in that regard, yes, it is.</p>
<p>A new model being better than an older one isn&rsquo;t really newsworthy in and of itself; the news is more about how it was made, where it was made, and its availability.</p>
<p>Starting with how it was made: this was a side project from a Chinese investment firm, which was able to make this model with around $6 million worth of investment. It is important to be critical of this number for various reasons, but we should not overly focus on it. Even if it cost $100 million, it would still be significantly less than what it has cost to create the models it is competing with.</p>
<p>The next key fact of its creation is that China is under an embargo on the latest computer hardware, e.g. chips, meaning this model was trained on older hardware. Again, it should be noted there is a consideration of &ldquo;<a href="https://www.404media.co/openai-furious-deepseek-might-have-stolen-all-the-data-openai-stole-from-us/">standing on the shoulders of giants</a>,&rdquo; in that some research and development had already been done, and they were potentially leveraging other available models to support the training of this one. Regardless, they proved that it was relatively cheap to create this model.</p>
<p>Lastly, the model was made under an open license: meaning (1) the model is free to use and modify; (2) you can use their website or app and run the model on their servers; (3) you can also download the model and run it locally. Essentially, this means that you now have the ability to run a model on par with the best available from OpenAI for a fraction of the cost.</p>
<p>So in short, this was made extremely cheaply, provided freely with the ability for others to modify it for their needs - <a href="https://www.404media.co/deepseek-mania-shakes-ai-industry-to-its-core/">showing that maybe the incumbents with their access to powerful hardware may not be as untouchable as initially thought</a>. Thus confidence fell, causing the stock market to react.</p>
<h1 id="is-it-safe">Is it safe?</h1>
<p>With that being said, is it safe to use? Well, it depends.</p>
<p>Starting with the hosted version, from their website or app: when it first came out I was wary of it, mostly because it is a new SaaS tool and hasn&rsquo;t really been reviewed or tested. So use it at your own risk, and maybe don&rsquo;t ask it (read: provide it) anything personal or private. That wariness proved to be warranted, as <a href="https://www.wiz.io/blog/wiz-research-uncovers-exposed-deepseek-database-leak">security researchers have already found a fully accessible database that Deepseek was using to store chats and logs</a>.</p>
<p>Additionally, Italy has begun pressuring them to review and adhere to various data processing laws. It did something similar with OpenAI&rsquo;s ChatGPT when it first came out, which was later sufficiently reviewed and approved. With Deepseek, however, the company has taken the stance that it falls only under Chinese data laws, as that is where its servers and company are located.</p>
<p>So, is it safe to use the app and website? I would say <strong>no</strong>, not until they start adhering to and proving that they are handling data securely. Now, I do also think that when you use ChatGPT, Google Gemini or any of the other hosted AIs, you should always take care to avoid providing more information than you feel comfortable with being used and stored by them.</p>
<p>Consider using DeepSeek locally instead. Running a <a href="https://medium.com/@ahmed.imtiaz2000/how-to-run-deepseek-locally-using-ollama-a-step-by-step-guide-14cd7b3feec5">large language model (LLM) locally</a>, without an internet connection, should offer more privacy since you control the computer processing the requests and have full control over it.</p>
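<p>As a rough sketch of what that looks like in practice - assuming the Ollama CLI and its current model tags, both of which may differ by the time you read this - pulling and chatting with a distilled DeepSeek model offline goes something like:</p>

```shell
# Install Ollama (Linux one-liner; see ollama.com for macOS/Windows installers)
curl -fsSL https://ollama.com/install.sh | sh

# Pull a distilled r1 variant sized for local hardware (tag names may vary)
ollama pull deepseek-r1:7b

# Once the weights are downloaded, chat entirely offline
ollama run deepseek-r1:7b "Summarize the trade-offs of running an LLM locally."
```

<p>After the initial download, nothing leaves your machine, which is the whole privacy argument.</p>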
<h1 id="being-critical">Being Critical</h1>
<p>There is a point of contention around the model&rsquo;s training data and censorship. The gist is that this model censors topics like Tiananmen Square. Now, I think the topic of censorship is beyond this post, but I do want to say that all models are censored in some way; they will all have bias, and you should always consider who built the model.</p>
<p>Furthermore, there are concerns about models being trained to provide seemingly harmless but actually malicious information. <a href="https://arxiv.org/abs/2401.05566">This is theoretically possible</a>. It is important to carefully examine the information provided by any model before taking action.</p>
<p>I have often taken the stance that you should treat an AI like an intern: they have a lot of time to get you answers or confirm things, but they can get things wrong, and they are heavily dependent on how you ask them to do something. In short, use them to augment and not replace your processes.</p>
<h1 id="the-future">The future</h1>
<p>In summary, DeepSeek has demonstrated that creating a powerful large language model (LLM) doesn&rsquo;t require massive hardware investments like OpenAI and others have made. This opens up the possibility of more players entering the field. However, it also highlights the importance of not blindly trusting new SaaS offerings with sensitive information until they have been thoroughly vetted.</p>
<p>For now, it might be prudent to avoid using the app until it has passed regulatory scrutiny in both the EU and US. Although these regions have their own biases, a consensus of reviews can help establish a baseline level of security.</p>
<p>We will probably see more models come out that are based on Deepseek, or trained in similar ways. This also applies to the big players, who should also benefit from this information to optimize their own training on top of having the most compute power to do even more.</p>
<p>Time will tell if this pushes to more availability of LLM models, or if the most powerful will still be closed behind big technology players.</p>
]]></content:encoded></item><item><title>AI doppelgänger Scams</title><link>https://ryanpmeyer.eu/posts/ai-doppelg%C3%A4nger-scams/</link><pubDate>Mon, 30 Dec 2024 00:00:00 +0000</pubDate><atom:updated>2024-12-30T00:00:00Z</atom:updated><author>hello@ryanpmeyer.eu (Ryan P. Meyer)</author><guid isPermaLink="true">https://ryanpmeyer.eu/posts/ai-doppelg%C3%A4nger-scams/</guid><category>posts</category><category>AI</category><category>Fraud</category><category>Scams</category><description>How Generative AI Improves Scammer Tools</description><content:encoded><![CDATA[<p>There’s been a few articles (<a href="https://www.npr.org/2024/12/24/nx-s1-5235265/how-to-protect-yourself-from-holiday-ai-scams">NPR</a>, <a href="https://www.wired.com/story/you-need-to-create-a-secret-passphrase-with-your-family/">Wired</a>)
recently regarding the use of AI and how it will provide new tools for scammers to better trick people by posing as someone they know. A lot of these articles have very helpful techniques, but I want to cover how people could realistically adopt them.</p>
<p>Because as we know, strong defenses are only useful if they actually get used.</p>
<p>First let’s start with making a scammers job harder to begin with.</p>
<h2 id="lockdown-accounts">Lockdown accounts</h2>
<p>This isn’t just about making sure you have secure passwords and MFA - which you should have. Locking down your accounts means being aware of what you post and how someone could determine information about you: where you are, problems you might have, interests they could tempt you with, and so on.</p>
<p>With AI, if there are videos or pictures of you - which I am sure there are of most people - then you could be cloned fairly easily. Honestly I don’t expect people to not post these types of things, which is why diligence is key.</p>
<p>Making accounts private, culling your followers, using separate accounts for professional and personal interests - these can help minimize how much someone could learn about you if you are targeted.</p>
<p>Additionally, make sure you can recover your accounts and that your recovery email is well protected.</p>
<p>In short:</p>
<ul>
<li>Make sure your posts aren’t more revealing than you want them to be</li>
<li>Know who follows you</li>
<li>Take your account private by default</li>
<li>Consider multiple accounts to split between public and private</li>
</ul>
<p>This is something everyone should do, and you should do it on a cadence that makes sense. Because settings change, your tolerance for posting personal information changes and all that should be considered over time.</p>
<h2 id="set-up-strong-authentication">Set up strong authentication</h2>
<p>Get a password manager, randomize your passwords, and if possible randomize your emails too! On top of all that, set up MFA where possible. I suggest software plus hardware (like a YubiKey).</p>
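<p>To get a feel for what “randomize your passwords” means in practice, here is a minimal Python sketch of the kind of generator a password manager uses (the function name and default length are my own choices, not any particular tool’s):</p>

```python
import secrets
import string

def random_password(length: int = 24) -> str:
    # Draw each character from letters, digits, and punctuation using a
    # cryptographically secure random source (the secrets module), so the
    # result is not predictable the way random.choice() output can be.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_password())  # 24 random characters, different every run
```

<p>The point is that every account gets a unique, unguessable password, and the manager remembers them so you don’t have to.</p>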
<p>Also, if you are helping a parent or loved one, a family account with password sharing may help when needed. We have all been there when a parent can&rsquo;t remember the password to an account they need your help with, and you have to go through the recovery dance.</p>
<h2 id="avoid-clicking-suspicious-links">Avoid clicking suspicious links</h2>
<p>This should also be an obvious one, but just don’t click links you don’t trust. These are links randomly texted to you, or ones that just don’t look right (like fedex-com[.]net).</p>
<p>Now, obviously, clicking on links in social media is going to happen. Just take a moment to check before you do. And if you’re a bit suspicious of a link, Google the website and the title of the article instead.</p>
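<p>That “does this domain look right?” check can even be automated. A minimal Python sketch (the expected-domain set is illustrative, and the registered-domain logic deliberately ignores multi-part TLDs like .co.uk):</p>

```python
from urllib.parse import urlparse

def looks_like_official(url: str, official_domains: set) -> bool:
    # Pull the hostname out of the URL, then keep only the last two
    # labels - the "registrable" part that a scammer can't fake without
    # owning a different domain outright.
    host = urlparse(url).hostname or ""
    registered = ".".join(host.split(".")[-2:])
    return registered in official_domains

official = {"fedex.com"}
print(looks_like_official("https://www.fedex.com/track", official))   # True
print(looks_like_official("https://fedex-com.net/track", official))   # False
```

<p>Note how fedex-com[.]net fails the check: its registered domain is fedex-com.net, no matter how FedEx-like it looks at a glance.</p>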
<p>With that being said, suppose you’re targeted and a bad actor is realistically posing as someone you know. Maybe it’s a phone call, or a voice message, or even as sophisticated as a video call - all leveraging AI tooling.</p>
<h2 id="secret-family-codes">Secret family codes</h2>
<p>I see this suggested a lot. It can be helpful, but really it is a shared-history knowledge challenge. So, realistically, the best way to leverage this in your daily life is to ask questions only the two of you might know the answers to, like a conversation you had previously or the last place you ate and what you ordered.</p>
<p>Obviously people are human, and I’m not the only one that forgets what I had eaten a day or two later. Also, as mentioned earlier, depending how open your life is on the internet someone might know what you ate last Sunday because you posted it on your stories.</p>
<p>Better yet would be to use intimate knowledge. These are things like inside jokes the two of you might have, personal discussions or other things that have a very low chance of being online.</p>
<p>Which is why creating a code phrase can still be helpful, as it is a better type of shared knowledge - intimate knowledge.</p>
<h2 id="secondary-channel-of-communication">Secondary channel of communication</h2>
<p>Another way to check if someone is who they say they are is to ask them to send you the same message on a known secure form of communication you two share - Instagram DMs, Signal messaging, or something where they need to authenticate to access it. Or you could reach out yourself and not tell them you’re planning to do it.</p>
<p>Now this isn’t perfect either, especially if the bad actor has already compromised those accounts and has access.</p>
<h2 id="third-person-check">Third person check</h2>
<p>Now, if you really aren’t sure how compromised that person might be, you can also reach out to someone else who is close to them to check. Depending on the time of day it could be a colleague or a partner - someone you expect them to be with at that moment, to have seen recently, or to see soon. That person can then work on physically finding them to confirm it really is them, or might be able to confirm the request.</p>
<p>Lastly is education, not only for yourself but for others around you: knowing not only who is aware of these scams, but also the extent of what scammers are capable of. The sad truth is, the most vulnerable are often targeted the most. Knowing who you have around you that needs the extra help can make all the difference in the world.</p>
<p>Again, the best tools are the ones that are used, and the more in use the more layers of defense you have available.</p>
<h1 id="how-scams-are-changing">How scams are changing</h1>
<p>I want to reiterate that who this is most important for isn’t necessarily going to be the reader, nor most of their immediate friends and family. It will be the grandparents and older generations. Yes I do hope you learn something or use one of these techniques, but I also want you to be proactive.</p>
<p>There’s an event I was told of, which didn’t involve AI but is of the same vein of directed attacks. The scammer was watching a home of an elderly woman. Their adult children were visiting and some left for home. About 10 minutes later the woman received a phone call from the scammer, posing as one of the people that had just left, saying that the other was in trouble with the police and needed to pay a fine otherwise they were going to go to jail. Luckily the woman didn’t recognize the voice, and instead called someone else to check.</p>
<p>Now imagine if this same tactic added an AI-generated voice clone of the person they know. Just being that little bit more convincing could have led to a successful scam. It’s these enhancements that people need to be prepared for.</p>
<p>As with everything in security, the bad actors only need to succeed once - so be diligent!</p>
]]></content:encoded></item></channel></rss>