Today: Visual Studio Code fueled Microsoft's decade-long enterprise winning streak, but new challenges loom; why Google and Microsoft are forcing you to use their AI tools; and the latest enterprise moves.
Visual Studio Code is a vital piece of Microsoft's enterprise strategy, which banks on developers' goodwill toward its tools to drive business to Azure and its other enterprise software products. But software development practices and preferences are changing rapidly.
AI use cases are becoming more powerful and pervasive across the software delivery lifecycle, but adopting any new technology comes with some risks. Nine members of our Roundtable discussed how technology leaders can reap the benefits of AI software-development tools while avoiding the pitfalls.
What are the best ways to implement AI tools like agents in the software development process?
Featuring: Rita Kozlov, Cloudflare; Frederic Rivain, Dashlane; Betty Junod, Heroku from Salesforce; Andrew Sellers, Confluent; Tracy Young, TigerEye; Sabrina Farmer, GitLab; Raj Pai, Google Cloud; Jared Palmer, Vercel; Steve Tack, Dynatrace
Rita Kozlov
VP, Product Management, Cloudflare
There are so many options for bringing AI into your codegen process that teams shouldn't shy away from AI, or they risk falling behind quickly. That being said, there are certain realities to keep in mind.
First, it's important that you be very aware of the potential security risks of sending your code to multiple third parties. You want to be able to experiment, and also keep security and privacy top of mind.
Second, most of the gains today are in "copilots" rather than true "agents" – there is still a human very close in the loop. Will that change in 2025? Maybe. But at the least, your teams should get very comfortable with copilots and keep an eye on anything that could plausibly run fully autonomously, for when the opportunity arises.
Frederic Rivain
CTO, Dashlane
Achieving broad internal adoption of copilot AI coding tools has been challenging. With the limitations and bugs of the first versions of AI copilots, we have found that there is a significant learning curve to optimize the usage of those tools, especially for junior developers. We have seen more promising applications for targeted use cases, such as enriching and facilitating code review.
For example, as a security company, Dashlane internally runs an AI agent that summarizes the context of code changes to facilitate code reviews and provide recommendations related to security risks. Other use cases we are looking into include retro-documenting old code or helping with unit test code coverage. We are also cautious about generating poor code through AI, so in the short term, we are mitigating that risk by applying AI agents to specific and isolated parts of the code base.
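This is not Dashlane's tool, but a minimal sketch of what such a review-assist pre-pass might look like: a heuristic scan flags security-sensitive lines in a diff, and the findings are folded into the prompt that would go to an LLM summarizer. The risk patterns and prompt wording are illustrative assumptions.

```python
import re

# Illustrative (not Dashlane's) patterns for security-sensitive changes.
RISKY_PATTERNS = {
    "credentials": re.compile(r"password|secret|api[_-]?key", re.I),
    "weak_crypto": re.compile(r"\bmd5\b|\bsha1\b"),
    "injection": re.compile(r"eval\(|exec\("),
}

def flag_security_risks(diff_text: str) -> dict[str, list[str]]:
    """Return risky added lines from a unified diff, grouped by category."""
    findings: dict[str, list[str]] = {}
    for line in diff_text.splitlines():
        if not line.startswith("+"):            # only inspect added lines
            continue
        for category, pattern in RISKY_PATTERNS.items():
            if pattern.search(line):
                findings.setdefault(category, []).append(line[1:].strip())
    return findings

def build_review_prompt(diff_text: str) -> str:
    """Compose the prompt a summarizing agent might send to an LLM."""
    findings = flag_security_risks(diff_text)
    notes = "\n".join(f"- {cat}: {len(lines)} line(s)" for cat, lines in findings.items())
    return (
        "Summarize this change for a reviewer. Pay special attention to:\n"
        f"{notes or '- no flagged areas'}\n\n```diff\n{diff_text}\n```"
    )
```

Keeping the flagging deterministic and outside the model means the security callouts in the summary are reproducible, which matters in a review workflow.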
One of the major challenges of GenAI-generated code is that the model is essentially copying and pasting patterns from what it was trained on, and copy/pasting is a coding anti-pattern.
Sponsored answer
Betty Junod
CMO, Heroku from Salesforce
AI is everywhere and impacts every area of work and life. To go from idea to real value, avoid getting distracted by the rapidly evolving ecosystem and take a step back to identify outcomes aligned to your organization’s needs.
Solutions often fail when we focus on technology features over the problem that needs solving. Targeting use cases where the work is well understood, but challenging or cumbersome, is an ideal starting point. Outline the specific areas of the workflow that can be improved with an agent, the guardrails required, which part of the loop is driven by humans, and what metrics should be monitored over time.
The key with agents is to accelerate the productive time for your employees. To complete a job, an individual talks to multiple people and accesses a variety of systems and data sources. Applying agents will create these same dependencies. Taking a platform approach to your agentic strategy will help you consider these dependencies with your business and security requirements and how the job can be reimagined with humans and agents in the process.
Andrew Sellers
Confluent
Reliable automation has become the key differentiator in measuring the maturity of a software development organization. Leading DevOps and site-reliability practices speed up the software delivery lifecycle with application-specific code written by expert practitioners to ensure continuous delivery while minimizing incidents and outages. AI agents offer a promising path for teams to leapfrog this significant investment.
When implementing AI tools in the software delivery lifecycle, it’s best to start with well-understood, low-impact areas such as integration and testing. This approach ensures that processes are improved without risking customer-facing services. AI copilots are already improving developer productivity, demonstrating the potential for broader operational decision-making.
However, deploying AI in software delivery requires substantial infrastructure instrumentation and real-time monitoring capabilities. AI agents require access to detailed application performance data to make informed decisions in response to deployment issues or infrastructure failures. Establishing explainable guardrails is crucial to mitigate risks associated with decision-making by opaque ML models. These external checks reduce the impact of errors, ensuring that AI tools augment rather than complicate the development process.
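One way to make such guardrails explainable is to keep them as plain, auditable threshold checks on real telemetry, separate from the opaque model proposing the action. The sketch below assumes hypothetical metric names and thresholds; it is not any vendor's implementation.

```python
from dataclasses import dataclass

@dataclass
class DeployMetrics:
    """Post-deployment telemetry an agent's action would be judged against."""
    error_rate: float       # fraction of failed requests
    p95_latency_ms: float   # current 95th-percentile latency
    baseline_p95_ms: float  # pre-deployment baseline

def guardrail_verdict(m: DeployMetrics) -> tuple[bool, str]:
    """Return (allow, reason) so every allow/block decision is auditable."""
    if m.error_rate > 0.05:
        return False, f"error rate {m.error_rate:.1%} exceeds 5% ceiling"
    if m.p95_latency_ms > 2 * m.baseline_p95_ms:
        return False, "p95 latency more than doubled vs. baseline"
    return True, "within thresholds"
```

Because the verdict carries a human-readable reason, operators can explain after the fact why an AI-proposed deployment was blocked, even if the model that proposed it cannot explain itself.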
Tracy Young
CEO and co-founder, TigerEye
AI tools can significantly enhance business processes by analyzing large datasets, such as those in CRM and ERP systems, to identify trends, flag risks, and deliver actionable insights. For example, an AI analyst can continuously learn from historical and real-time data to detect anomalies, forecast trends, and highlight emerging risks. This helps leaders make faster, data-driven decisions while improving accuracy and efficiency.
Start by focusing on areas where AI can help with trend analysis and risk detection without changing processes — and then scale its capabilities to broader business operations.
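The anomaly-detection starting point described above can be as simple as a rolling z-score over a business metric; a real CRM/ERP pipeline would feed richer features, but the shape is the same. The window and threshold here are illustrative defaults.

```python
import statistics

def flag_anomalies(series: list[float], window: int = 5, threshold: float = 3.0) -> list[int]:
    """Return indices of values deviating more than `threshold` standard
    deviations from the mean of the trailing `window` observations."""
    anomalies = []
    for i in range(window, len(series)):
        prior = series[i - window:i]
        mean = statistics.fmean(prior)
        stdev = statistics.stdev(prior)
        if stdev > 0 and abs(series[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies
```

For example, a weekly bookings series of [10, 11, 10, 12, 11, 50, 10] would flag only the spike at index 5, matching the "trend analysis without changing processes" advice: the check observes data, it does not alter any workflow.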
Sabrina Farmer
CTO, GitLab
AI has shifted from a perceived job threat to a tool that empowers developers to focus on higher-value, critical-thinking tasks that only humans can effectively execute. Developers can most effectively implement AI agents by automating pre-work and routine tasks, including discovery phases, backlog prioritization, documentation generation, and code testing.
Automation is particularly well-suited for code testing because it provides end-to-end testing to understand how well a solution meets its intended value. This enables developers to be strategic and identify the work bringing the biggest business value.
Developers must remain aware of the risk of falling into a validation gap by placing excessive trust in AI outputs, which can bypass essential critical-thinking steps or let AI-generated assumptions go unverified. Such practices may lead to complications as AI tools become more deeply integrated into software development processes.
Leadership should ensure that agents are run with the appropriate credentials and logging. By ensuring visibility into where agents are taking actions across your organization’s DevOps ecosystem, teams can avoid surprises or unintended consequences for the broader enterprise.
Raj Pai
VP of Product Management for Cloud AI, Google Cloud
The only way to successfully integrate high-quality agents into the software development process is to invest in high-quality data and people. AI agents require a strong foundation to be effective – so starting with refined, accurate data and sharp people who can augment the agent when needed are of the utmost importance.
Organizations should start small, run pilot programs, and fine-tune agents before deploying them more widely, to avoid running into common challenges around accuracy, security, and integration with existing tools.
Jared Palmer
VP of AI, Vercel
First, it’s important to understand that “agent” is still a hyped word. Today's agents are system prompts with some retrieval and tool calls, run in loops or state machines by a scheduler with shared context and variables; they aren’t reliable yet. An agent that runs a task for five hours and comes back to you doesn’t exist yet, but that’s coming soon.
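A stripped-down sketch of that loop: a system prompt, a tool registry, and a run loop that feeds tool results back to the model until it emits a final answer. Everything here is hypothetical for illustration, including the message shapes and the scripted `fake_model` standing in for a real LLM call.

```python
from typing import Callable

# Hypothetical tool registry; eval is restricted and for demo only.
TOOLS: dict[str, Callable[[str], str]] = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def run_agent(model: Callable[[list[dict]], dict], task: str, max_steps: int = 5) -> str:
    """The loop: call the model, execute any requested tool, feed the
    result back, and stop when the model returns a final answer."""
    messages = [{"role": "system", "content": "You may call tools."},
                {"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = model(messages)                       # one model step
        if reply.get("tool"):                         # model asked for a tool
            result = TOOLS[reply["tool"]](reply["input"])
            messages.append({"role": "tool", "content": result})
        else:
            return reply["content"]                   # final answer
    return "step limit reached"

# Scripted stand-in for an LLM: request the calculator once, then answer.
def fake_model(messages: list[dict]) -> dict:
    if messages[-1]["role"] != "tool":
        return {"tool": "calculator", "input": "6 * 7"}
    return {"content": f"The answer is {messages[-1]['content']}"}
```

The `max_steps` cap is the "state machine" part in miniature: without it, an unreliable model request loop can spin forever, which is exactly the reliability gap Palmer describes.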
I recommend focusing on four areas of skill-building for your dev team to ensure success with AI implementation:
Eval-Driven Development: Get good at evals and benchmarking. It’s the best way to systematically track and improve your outputs that’s not based on just vibes.
Prompt Engineering: Intuition goes a long way, but it’s important to learn the latest prompt techniques and refine them constantly based on your evals.
RAG (Retrieval Augmented Generation): Vector search alone isn’t enough anymore; you want hybrid retrieval and semantic reranking on top of appropriate chunking for your use case.
Fine Tuning and Post Training: Most teams over-invest here early. It’s important but is rarely your first problem. Start with evals, prompt engineering, and retrieval.
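The first point, eval-driven development, can start very small: pair each input with an output checker and track the pass rate across prompt or model changes. The toy model and grading lambdas below are illustrative assumptions; `model` can be any callable wrapping a real LLM.

```python
from typing import Callable

EvalCase = tuple[str, Callable[[str], bool]]    # (input prompt, output checker)

def run_evals(model: Callable[[str], str], cases: list[EvalCase]) -> float:
    """Return the fraction of cases whose output passes its checker."""
    passed = sum(1 for prompt, check in cases if check(model(prompt)))
    return passed / len(cases)

# Toy stand-in "model" and two graded cases, for illustration.
toy_model = lambda p: "Paris" if "France" in p else "unknown"
cases: list[EvalCase] = [
    ("Capital of France?", lambda out: "paris" in out.lower()),
    ("Capital of Peru?",   lambda out: "lima" in out.lower()),
]
```

A number like the 0.5 pass rate this toy suite produces is what replaces "just vibes": rerun the same suite after every prompt tweak and you have a trend line instead of an impression.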
Steve Tack
Chief Product Officer, Dynatrace
Implementing AI-based tools into the software development process takes time, but the key is to have clear and well-defined use cases that align with existing software and development workflows. Done right, AI implementation can accelerate development cycles with intelligent automation, improve software quality, and eliminate manual effort on tasks like testing, security, debugging, deployment, and more – freeing developers to engage in strategic, high-level code development.
While adoption challenges are inevitable, this shouldn't deter developers from implementing new tools. Instead, these challenges should be anticipated and managed by observing the AI system’s behavior to improve and optimize its performance. For instance, the costs associated with GenAI are complex, often running as much as 5x higher than those of traditional cloud services.
Additionally, GenAI services can exhibit erratic behavior due to unforeseen data scenarios or underlying system issues. LLMs are also prone to "hallucinations," can amplify human biases, and struggle to deliver highly personalized outputs. To address these issues and ensure success, a holistic approach to the observability, transparency, and cohesive understanding of AI-powered applications is crucial.
Kubernetes has become one of the most widely used tools in distributed system infrastructure, but powerful tools can rack up significant expenses without proper configuration or management. Seven members of our Roundtable offered advice this month on the best ways to control those costs.
DevOps walked so platform engineering could run, but building a standardized organizational approach to software development can cause as many problems as it aims to solve if the platform's foundation is shaky. Here's how eight experts think companies should approach platform engineering.
Every company understands the value of their corporate data, but it's easy to lose track of priorities when trying to update their toolsets, especially during the generative AI frenzy. Here's how eight experts think companies should navigate the tricky road to the modern data stack.
Despite some recent gains by law enforcement, ransomware remains a pernicious problem for companies large and small. Here's how eight experts advise preparing for ransomware attacks.