U.S. cybersecurity experts have released a list of 25 Common Vulnerabilities and Exposures (CVEs) known to have been recently leveraged or scanned by attackers.
Twitter gives a second chance to “potentially dangerous” users. It launched an experiment on iOS with a prompt that gives users the option to revise replies before they are published when they contain language that could be harmful.
Twitter gives users a second chance before they publish potentially harmful content. The social network has launched an experiment on iOS with a prompt that gives users the option to revise replies before they are published when they contain language that could be harmful. The company announced the test in a tweet. In addition, some users on iOS and the web will see a new layout for replies, with lines and indentations that make it clearer who is talking to whom and that fit more of the conversation in one view. The platform is also experimenting with placing the like, Retweet, and reply icons behind a tap for replies. At the moment, it is a test “with a small group on iOS and web to see how it affects following and engaging with a convo”.
Twitter is following the example Facebook set with Instagram on bullying
Twitter is following the example of what Facebook did with Instagram last December, when Instagram rolled out a new feature that notifies people when their captions on a photo or video may be considered offensive, giving them a chance to pause and reconsider their words before posting. The company developed and tested AI that can recognize different forms of bullying on Instagram. Earlier in 2019, it also launched a feature that notifies people when their comments may be considered offensive before they are posted. According to a blog post, “results have been promising, and we’ve found that these types of nudges can encourage people to reconsider their words when given a chance”.