LinkedIn Launches New Security Features To Combat Fraud
Popular professional networking app LinkedIn has released a new set of features to help combat fraud and spam on the platform.
The company says the new features give users an added layer of security and protection, including better insight into new profiles, designs that shield users from scams, and new tools that flag AI-generated profile pictures.
For starters, there’s a feature called ‘About this profile’ that gives more insight into how a profile was created, when it was last updated, and more.
Simply tap the three-dot menu in the app and you’ll find all sorts of information, including when a profile was created and details about its registered email address and phone number.
This makes it much simpler to tell who is real and who is fake across the network, helping users avoid those looking to steal their sensitive details. Just last month, an alarming report by MIT noted that the number of such fraud cases was growing, which is a serious concern for obvious reasons.
According to the report, scammers use these connections to pressure users into cryptocurrency investment scams, and LinkedIn victims tend to lose funds more easily than victims of similar fraud on other platforms.
In addition, over the past year several separate databases containing details of LinkedIn users surfaced. Large numbers of user records were offered for sale on the dark web, and closer inspection showed the information had not been obtained through hacking but through data scraping.
LinkedIn has pursued such scraping through legal channels for several years. The scraping involved scammers collecting data by connecting with users on the platform, then leveraging that access to gather ever more information about those users and their networks.
The prevalence of these issues has pushed the platform to take the necessary steps to improve the app’s security, which is where this new update stems from. With measures like these, users can better judge whom to connect with and whom to avoid.
LinkedIn is also working hard to improve its AI and machine-learning models so they can better detect profile pictures created with AI image-generation tools.
LinkedIn released a new report on this effort, saying it is using state-of-the-art technology to ensure uploaded images are authentic and not posted with malicious intent.
Verification through means such as facial recognition and biometric analysis is also being considered, with the goal of strengthening anti-abuse defenses so fake accounts can be removed with ease.
Over time this is only going to become a bigger problem, as AI image generation makes it harder for people to tell what is real and what is fake, and just as hard for automated systems to differentiate between the two.
Last but not least, the app says it will add new warning alerts to users’ DMs, giving them a heads-up about content that may be harmful to view and sparing them any unwanted surprises.