Software Development

Explore top LinkedIn content from expert professionals.

  • View profile for Beth Kanter (Influencer)

    Trainer, Consultant & Nonprofit Innovator in digital transformation & workplace wellbeing, recognized by Fast Company & NTEN Lifetime Achievement Award.

    521,991 followers

    This Stanford study examined how six major AI companies (Anthropic, OpenAI, Google, Meta, Microsoft, and Amazon) handle user data from chatbot conversations. Here are the main privacy concerns:

    👀 All six companies use chat data for training by default, though some allow opt-out
    👀 Data retention is often indefinite, with personal information stored long-term
    👀 Cross-platform data merging occurs at multi-product companies (Google, Meta, Microsoft, Amazon)
    👀 Children's data is handled inconsistently, with most companies not adequately protecting minors
    👀 Privacy policies offer limited transparency: they are complex, hard to understand, and often lack crucial details about actual practices

    Practical takeaways for acceptable use policy and training for nonprofits using generative AI:

    ✅ Assume anything you share will be used for training: sensitive information, uploaded files, health details, biometric data, etc.
    ✅ Opt out when possible; proactively disable data collection for training (Meta is the one provider where you cannot)
    ✅ Information cascades through ecosystems; your inputs can lead to inferences that affect ads, recommendations, and potentially insurance or other third parties
    ✅ Children's data is a special concern; age verification and consent protections are inconsistent

    Some questions to consider in acceptable use policies and to incorporate into any training:

    ❓ What types of sensitive information might your nonprofit staff share with generative AI?
    ❓ Does your nonprofit specifically define what counts as "sensitive information" (beyond PII) that should not be shared with generative AI? Is this incorporated into training?
    ❓ Are you working with children, people with health conditions, or others whose data could be particularly harmful if leaked or misused?
    ❓ What would be the consequences if sensitive information or strategic organizational data ended up being used to train AI models? How might this affect trust, compliance, or your mission? How is this communicated in training and policy?

    Across the board, the Stanford researchers find that developers' privacy policies lack essential information about their practices. They recommend that policymakers and developers address the data privacy challenges posed by LLM-powered chatbots through comprehensive federal privacy regulation, affirmative opt-in for model training, and filtering personal information from chat inputs by default. "We need to promote innovation in privacy-preserving AI, so that user privacy isn't an afterthought." How are you advocating for privacy-preserving AI? How are you educating your staff to navigate this challenge? https://lnkd.in/g3RmbEwD
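The recommendation to filter personal information from chat inputs by default can be illustrated with a minimal sketch. This is an assumption-laden toy, not anything from the study: the `PATTERNS` table and `redact` function are invented for illustration, and real deployments need far broader coverage (names, addresses, IDs) via dedicated PII-detection tooling.

```python
import re

# Hypothetical minimal redactor: masks emails and US-style phone numbers
# before a prompt is sent to an LLM. Illustrative only; production systems
# use dedicated PII-detection libraries with much wider coverage.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}


def redact(text):
    """Replace each detected item with a [TYPE] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

For example, `redact("Contact jane.doe@example.org or 555-123-4567")` yields `"Contact [EMAIL] or [PHONE]"`, so the raw identifiers never reach the model or its training pipeline.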

  • View profile for Sandra Sanz

    Head of Product Design | Building AI-powered apps

    3,234 followers

    Apple just quietly broke the rules of the App Store. Almost no one noticed. But if you're building products, especially with AI, you should. On November 13, Apple launched the Mini Apps Partner Program. No keynote. No hype. Just a quiet update on the Developer News page. But the implications are big:

    - Developers can now build lightweight mini-apps (HTML/JS) that run *inside* larger iOS apps
    - Apple lets you keep 85% of qualifying purchases, the best deal they've ever offered
    - High-traffic apps can become platforms, hosting mini-apps and building their own ecosystems

    Why this matters:

    - Distribution changes: instead of fighting for downloads, your mini-app lives where users already are
    - SaaS goes embedded: CRM, calculators, tools, and workflows move inside bigger apps
    - AI fits perfectly: LLM-powered micro-utilities can live as focused, in-context mini-apps

    We're early. It's invite-only, hosts are curated, and the tooling layer (the "Shopify for mini apps") doesn't exist yet. But the direction is clear: the App Store is no longer just a store. Every app can become an ecosystem. Every developer can be a platform partner. If you're building products, especially with AI, this is your signal to start thinking modular and embedded.

  • View profile for Romuald Czlonkowski

    AI Implementation Practitioner & Advisor | n8n-mcp.com founder (75k+ users)

    8,357 followers

    What happens when you open-source a tool you built for yourself? 10,000+ developers started using it. I created n8n-MCP, a tool that enables AI agents such as Claude Desktop or Cursor to actually build working n8n workflows. I simply wanted to solve my own problem. The decision to share it publicly led to something I never expected. 🌍

    The numbers tell an interesting story:
    • Docker images downloaded 10.1k times
    • npx installations: 4.6k
    • Repository cloned 2.6k times by 1.7k unique developers
    • Nearly 40k views, averaging 3k unique visitors daily
    • 1.7k GitHub stars ⭐
    • 4 independent creators discovered the tool and created YouTube tutorials

    But beyond the metrics, what truly amazes me is how the community adapted the tool to their own needs. From solo developers running local instances to entire teams deploying it remotely, each found their own use case. The tool has been called "the first one that actually works" and "best on the market".

    The experience taught me a valuable, yet obvious, lesson: when you have deep knowledge in a domain and solve a particular pain point in it well, you often solve problems you didn't know others had too. 💡 Open source isn't just about code; it's about creating tools that multiply possibilities across the community. Sometimes the most impactful contributions come from simply scratching your own itch and having the courage to share it. Have you tried it yet?

  • View profile for Jennifer Fruehauf

    Customer and Transformation Leader | Driving Enterprise Growth, Loyalty and Commercial Value at Scale | Speaker | Ex-Salesforce

    3,687 followers

    There has never been a better time to be a woman in tech.

    Wait. You're wondering if I've seen the same reports as you. I have. And they're dismal. In an article published in The Times, Martha Lane Fox writes that a US tech CEO told her earlier this year that the industry "is done with women". She also notes that the percentage of women in tech has not improved in 30 years. Available data suggests this percentage actually dipped from the 1990s through the 2010s, and although it has improved somewhat since then, it remains lower today than it was 30 years ago.

    Now the tech industry is going all in on AI, with very few women at the table to influence what is developed, why, and how. The same biases we've faced are being embedded into systems and amplified in ways that will fundamentally reshape our society. This is precisely why we need more women in AI, and why now is the right time. The stakes are simply too high for AI to be developed and controlled by a narrow few. If we don't act now, while AI is still in its early phase of expansion, we risk being locked out of what is perhaps the most transformative technology of our generation. #womenintech #womeninai #responsibleai

  • View profile for Ethan Evans (Influencer)

    Former Amazon VP, sharing High Performance and Career Growth insights. Outperform, out-compete, and still get time off for yourself.

    168,769 followers

    In 2011, the Amazon Appstore failed on launch and Jeff Bezos was furious. It was my fault, and I handled one aspect of recovery so poorly that one of my engineers quit. I still regret it 14 years later. Please learn from my mistake.

    The main lesson is that when you are leading through a crisis, it can feel like it is all about you. It isn't. It is about:
    1) Solving the problem
    2) Guiding your team through it

    The product issue was that there were some pretty simple bugs, and we solved those problems well enough that I was eventually promoted. Where I failed was in guiding my team through the crisis. My leadership miss was that I neglected to encourage and support the engineer who had written the bad code. He did a great job stepping up and supporting the effort to fix the problem, but shortly afterward, he resigned.

    During the crisis, I failed to make clear to him that we did not blame him for the launch failure despite the bugs. I imagine that left room for him to think we blamed him or that he didn't belong. It is also possible that others did blame him directly and that I was too caught up in the crisis to realize it. Both failures were my responsibility as the leader of the team.

    His resignation taught me a valuable lesson about leading through a crisis: no matter how bad the situation is, your team must be your first priority. If you make them feel safe, they will move heaven and earth to fix the problem. If you don't, they may still fix the problem, but the team itself will never be the same.

    As a leader, here is how you can give them what they need:
    1) Take the blame and do not allow others to be blamed. In some bug cases after this, we did not release the name of the engineer outside the team, in order to protect them from judgment or blame.
    2) Separate fixing the problem from figuring out why it happened. Once the problem is fixed, you can focus on root-causing. This lowers the risk of the search for answers getting confused with a search for someone to blame.
    3) Realize that anyone involved in the problem already feels bad. High performers know when they have fallen short and let their team down. As a leader, you have to show them the path to growth and success after the crisis. They do not need to be beaten up; they have taken care of that themselves.
    4) See crises and problems as growth opportunities, not personal flaws. Your team comes with you in a crisis whether you like it or not, so you might as well come out stronger on the other side.

    As a leader, the responsibility for a crisis is yours in two ways: the problem itself, and the effect it has on the future of the team. Don't get too caught up in the first to think about the second.

    Readers: has your team survived a crisis? How did you handle it?

  • View profile for SHAFIQ UR RAHMAN

    Co-Founder @ Tyaari | 10X Your Business with AI & Automation | Innovating in Edtech and Generative AI | AI Edtech SAAS | AI-Augmented ERP Solutions

    25,244 followers

    Open source isn’t free. Someone else just paid the price. Behind every great library is often a tired, unpaid maintainer. Let’s talk about the tools we all rely on:
    ➡️ Java apps run on Spring Boot, Maven, Log4j.
    ➡️ Python apps lean on NumPy, Pandas, TensorFlow.
    ➡️ Web apps, APIs, analytics, and AI models depend on them.

    Most of these are maintained by developers working late nights, weekends, and holidays, without compensation. A handful of contributors are carrying the weight of global infrastructure. Big companies use these tools at massive scale… but rarely give back.
    ➡️ One bug in Log4j created a global security crisis.
    ➡️ A NumPy issue could break critical scientific apps.

    Meanwhile, maintainers handle security patches, bug reports, support questions, and community drama, often alone. Why does this matter? Because your code, models, and servers wouldn’t run without them.
    ⚠️ Burnout is real.
    ⚠️ When maintainers leave, critical software becomes vulnerable.

    How to support open source:
    ✅ Contribute: fix bugs, improve docs, add tests.
    ✅ Sponsor: via GitHub Sponsors, OpenCollective, or direct donations.
    ✅ Promote: highlight their work in talks, blogs, and posts.
    ✅ Be thoughtful: don’t treat open source like free labor.
    ✅ Say thanks: a kind message or tweet can mean the world.

    Next time you import or build, remember: there’s a human behind that code. Let’s support them. #OpenSource #DevCommunity #SoftwareEngineering #SupportMaintainers #TechResponsibility #BurnoutIsReal

  • View profile for Vitaly Friedman (Influencer)

    Practical insights for better UX • Running “Measure UX” and “Design Patterns For AI” • Founder of SmashingMag • Speaker • Loves writing, checklists and running workshops on UX. 🍣

    225,116 followers

    🌳 Design Patterns For Building Trust. With practical guidelines for designers on how to make products, AI and non-AI, more trustworthy, reliable, and honest.

    In today's noisy and polluted world, trust doesn't come for free. It doesn't emerge by default. It must be earned and meticulously preserved, by being reliable, accountable, and treating customers with respect. This holds true for people, but also for software. According to Anyi Sun, there are 5 psychological foundations of user trust:

    1. Reliability 🔰 The degree to which the product consistently behaves as expected. It's a sense that the product is dependable, based on a track record of past actions. Reliability comes from promising what you do, and doing what you promised.

    2. Technical competence ⚡ The perceived intelligence, sophistication, and capability of the product. It's the user's belief that the product can successfully perform what it is being trusted to do. It's about trusting the product's capability.

    3. Understandability 🧠 The extent to which users feel they can understand how the system works or why it made a certain decision. The product must be able to articulate how a decision came about, with references to the fragments that underpin it.

    4. Faith and care 🌱 Emotional, almost "blind" trust in the product, especially when users don't understand the underlying logic. It's a belief that the trusted party actually cares about a positive outcome for you and intends to do good.

    5. Personal attachment 🌳 A sense of rapport, connection, or emotional engagement with the product. It typically emerges when users feel they get meaningful value from the product, and from interactions with the people supporting it.

    Personally, I would also add the value of repeated positive experiences that build confidence in the quality of the product, and hence its reliability.

    ---

    With AI products, hitting all these psychological foundations is extremely hard. Some people trust AI almost instinctively; others are more critical. But people's attitude often changes dramatically once they realize they've made severe mistakes because of AI. Recovering from that is very hard. We can help with some design patterns:

    1. Avoid "Ask me anything" → push for scoping and constraints
    2. Slow down users in prompting → request specific details
    3. Present multiple viewpoints; explain that experts disagree
    4. Allow users to manage "memory", profiles, and personalization
    5. Highlight what is AI-generated and what isn't (AI disclosure)
    6. Allow users to override AI-generated suggestions manually
    7. Allow users to tweak AI output and refine it for their needs
    8. Adapt the AI's tone depending on the severity of the user's task

    Trust is why people stay or leave. It builds long-term loyalty and helps users overcome hesitation. But it must be designed and retained, across all psychological foundations and with thoughtful UX work. I think designers will be quite busy for years to come. #ux #design

  • View profile for Peter Slattery, PhD

    MIT AI Risk Initiative | MIT FutureTech

    68,053 followers

    Isabel Barberá: "This document provides practical guidance and tools for developers and users of Large Language Model (LLM) based systems to manage privacy risks associated with these technologies. The risk management methodology outlined in this document is designed to help developers and users systematically identify, assess, and mitigate privacy and data protection risks, supporting the responsible development and deployment of LLM systems. This guidance also supports the requirements of GDPR Article 25 (Data protection by design and by default) and Article 32 (Security of processing) by offering technical and organizational measures to help ensure an appropriate level of security and data protection. However, the guidance is not intended to replace a Data Protection Impact Assessment (DPIA) as required under Article 35 of the GDPR. Instead, it complements the DPIA process by addressing privacy risks specific to LLM systems, thereby enhancing the robustness of such assessments.

    Guidance for Readers
    > For Developers: Use this guidance to integrate privacy risk management into the development lifecycle and deployment of your LLM-based systems, from understanding data flows to implementing risk identification and mitigation measures.
    > For Users: Refer to this document to evaluate the privacy risks associated with LLM systems you plan to deploy and use, helping you adopt responsible practices and protect individuals' privacy.
    > For Decision-makers: The structured methodology and use case examples will help you assess the compliance of LLM systems and make informed risk-based decisions."

    European Data Protection Board

  • View profile for Rajya Vardhan Mishra

    Engineering Leader @ Google | Mentored 300+ Software Engineers | Building High-Performance Teams | Tech Speaker | Led $1B+ programs | Cornell University | Lifelong Learner | My Views != Employer’s Views

    113,624 followers

    Dear Software Engineers,

    If your app serves 10 users → a single server and REST API will do
    If you're handling 10M requests a day → start thinking load balancers, autoscaling, and rate limits

    If one developer is building features → skip the ceremony, ship and test manually
    If 10 devs are pushing daily → invest in CI/CD, testing layers, and feature flags

    If your downtime just breaks one page → add a banner and move on
    If your downtime kills a business flow → redundancy, health checks, and graceful fallbacks are non-negotiable

    If you're just consuming APIs → learn how to handle 400s and 500s
    If you're building APIs for others → version them, document them, test them, and monitor them

    If your product can tolerate 3s of lag → pick clarity over performance
    If users are waiting on each click → profiling, caching, and edge delivery are part of your job

    If your data fits in RAM → store it in memory, use simple maps
    If your data spans terabytes → indexing, partitioning, and disk I/O patterns start to matter

    If you're solo coding → naming things poorly is just annoying
    If you're on a growing team → naming things poorly is a ticking time bomb

    If you're fixing bugs once a week → logs and console prints might do
    If you're running production → you need structured logs, tracing, alerts, and dashboards

    If your deadlines are tight → write the simplest code that works
    If your code is expected to last → design for readability, testability, and change

    If you work alone → "it works on my machine" might be fine
    If you're in a real team → reproducible builds and shared dev setups are your baseline

    If your app is new → move fast, clean up later
    If your app is in maintenance hell → you now pay interest on every rushed decision

    People think software engineering is just about building things. It's really about:
    – Knowing when not to build
    – Being okay with deleting good code
    – Balancing tradeoffs without always having all the data

    The best engineers don't just ship fast. They build systems that are safe to move fast on top of.
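The "handle 400s and 500s" advice above can be made concrete. A minimal Python sketch using only the standard library (the names `fetch_json` and `should_retry` are illustrative, not from the post): fail fast on 4xx client errors, where retrying cannot help, and retry 5xx server errors with exponential backoff.

```python
import json
import time
import urllib.error
import urllib.request


def should_retry(status, attempt, max_retries):
    """Retry only server errors (5xx), and only while attempts remain."""
    return 500 <= status < 600 and attempt < max_retries


def fetch_json(url, max_retries=3, backoff=1.0):
    """GET a JSON resource, failing fast on 4xx and retrying 5xx."""
    for attempt in range(max_retries + 1):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return json.load(resp)
        except urllib.error.HTTPError as err:
            if not should_retry(err.code, attempt, max_retries):
                # Client error (4xx) or retries exhausted: surface it.
                raise
            # Transient server error: back off exponentially, then retry.
            time.sleep(backoff * (2 ** attempt))
```

With the defaults, a 404 raises immediately, while a persistent 503 sleeps 1s, 2s, then 4s between attempts before the final error is raised.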

  • View profile for Usman Sheikh

    I co-found companies with experts ready to own outcomes, not give advice.

    56,127 followers

    Stagnation kills careers faster than failure. Phase transitions ensure survival.

    Most professionals perfect one skillset and stay put. Top performers follow a different pattern: they master cycles of convergence and divergence. With AI accelerating skill decay, clinging exclusively to one mode, specialist or generalist, is no longer merely limiting. It's dangerous. Thriving requires mastering phase transitions:
    → Knowing when to deepen your skills
    → Recognizing when to broaden your perspective
    This separates thriving from stagnating.

    When I look back at my own journey, it's been a deliberate roller coaster of experiences:
    → Learning to sell when I couldn't afford a sales team
    → Learning to design when our product lacked polish
    → Learning to write when my message wasn't landing
    → Learning to lead when talent became our bottleneck
    → Learning to invest when building wasn't enough

    The key wasn't what to learn; it was when to shift. Convergence built my expertise. Divergence created my resilience. Phase transitions between them unlocked my growth. This pattern isn't unique to my experience.

    The Convergence Trap
    Most careers start with specialization:
    → The market initially rewards deep expertise
    → Professionals double down on what works
    → Specialization creates early career velocity
    → Recognition follows mastery
    But specialization becomes a trap. Experts become commodities. Skills depreciate rapidly. Disruptions render specialties obsolete. AI replicates what took decades to master.

    The Divergence Dilemma
    Others swing too far toward breadth:
    → They chase every new trend and technology
    → They collect skills without integration
    → They spread themselves too thin
    → They become perpetual beginners, masters of none
    This path is equally dangerous. Focus dilutes. Credibility suffers. Impact remains surface-level. Strategic opportunities requiring expertise are missed.

    Stay too long in either state, and stagnation is inevitable. Neither path sustains growth without phase transitions. The transitions themselves, those moments of deliberate change, are where breakthroughs emerge.

    Two practices that have worked for me:
    1. Listen for whispers, not screams. You know when you've hit a ceiling. Don't suppress that feeling. Listen to it. Most wait until their career stalls. By then, it's too late.
    2. Set explicit transition triggers. I tell myself: "When X happens, I'll diverge." Making this decision in advance is crucial.

    Yes, stepping away from what you're good at hurts. But it makes space for discovering what you're great at. The difference between stagnation and growth isn't luck or talent. It's your willingness to recognize phase transitions before they're forced upon you. You have a choice: keep optimizing for what worked yesterday, or master the phase transitions that shape tomorrow. Your career isn't defined by what you know. It's defined by when you choose to evolve.
