Part 3: From Public Purpose to Digital Access

Roman architecture was never only about engineering marvels—it was about serving the people.

  • Aqueducts carried water into cities, powering daily life and public health.
  • Forums became centers of democracy, enabling civic discourse.
  • Public baths weren’t just about hygiene—they built community and connection.

Every structure was designed for a public purpose, empowering citizens and improving lives at scale.


AWS Services as Digital Public Infrastructure

AWS generative AI reflects this civic philosophy by democratizing access to cutting-edge models through cloud-native services. Instead of aqueducts and forums, we have APIs and managed services that distribute intelligence and capability:

  • Amazon Bedrock
    Provides serverless APIs to foundation models from providers like Anthropic, Meta, Cohere, and Mistral. Developers don’t need to manage infrastructure or train massive models—they can instantly consume them, just as Roman citizens accessed aqueduct water without needing to understand the engineering behind it.
  • Amazon SageMaker
    Functions as the forum for builders and scientists. It offers a collaborative environment to build, train, fine-tune, and deploy custom generative AI models. Features like SageMaker Studio, JumpStart, and the Model Registry ensure that teams can innovate together with governance and efficiency.
  • Inferentia & Trainium Chips
    These custom AWS chips are the concrete and aqueduct channels of today’s AI infrastructure. They provide high-performance, cost-optimized inference and training for generative models. By lowering compute costs, they make AI more accessible to startups and enterprises alike.
  • Amazon API Gateway & Lambda
    Think of these as the digital conduits—akin to aqueduct pipes—that distribute AI capabilities to millions of users via apps, websites, and services, without requiring heavy infrastructure investments. A minimal handler sketch follows this list.
  • Amazon OpenSearch & Kendra
    These services act like the forums of old—organizing and retrieving information so that people can ask questions and access knowledge easily. When paired with generative AI, they enable natural language search and contextual insights across massive data sets.
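
To make the "digital conduit" idea concrete, below is a minimal sketch of a Lambda handler that forwards a user prompt to a Bedrock foundation model and returns the reply. It assumes Python with boto3, an execution role allowed to call bedrock:InvokeModel, and a model enabled in your account; the model ID shown is only an example.

```python
# Minimal sketch: Lambda as a conduit between an API Gateway request and Bedrock.
# Assumes boto3, bedrock:InvokeModel permission, and an enabled model (example ID below).
import json
import boto3

bedrock = boto3.client("bedrock-runtime")

def lambda_handler(event, context):
    # API Gateway proxy integration delivers the request body as a JSON string.
    body = json.loads(event.get("body") or "{}")
    prompt = body.get("prompt", "Hello")

    # The Converse API gives a uniform request/response shape across model providers.
    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    answer = response["output"]["message"]["content"][0]["text"]

    return {"statusCode": 200, "body": json.dumps({"answer": answer})}
```

Fronted by Amazon API Gateway, a handler like this can distribute model access to apps and websites without any dedicated infrastructure, much as an aqueduct pipe delivers water without the household ever seeing the source.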

The Legacy Parallel

Roman concrete still holds strong after 2,000 years, a testament to their vision for longevity. Similarly, AWS’s cloud-native AI stack—built on principles of scalability, modularity, and sustainability—ensures innovation can endure and adapt for generations of technology.

Both remind us that the greatest architectures, whether carved in stone or provisioned in code, are those that serve people broadly and meaningfully.

This concludes the three-part comparison of Roman architecture and AWS generative AI services.

Part 2: From Arches to Pipelines

The genius of Roman engineering wasn’t just in their monuments—it was in their patterns.

  • Arches distributed heavy loads with elegance.
  • Domes enclosed vast spaces without collapsing.
  • Concrete gave them strength, flexibility, and the ability to scale construction.

These patterns were reusable, adaptable, and reliable—allowing Rome to expand from one city into an empire.

In the digital world, AWS generative AI applies the same principle of reusable patterns:

  • SageMaker pipelines are today’s arches—distributing workflows, balancing complexity, and channeling resources efficiently (see the sketch after this list).
  • Bedrock APIs are modern domes—enclosing sophisticated models in simple, accessible interfaces.
  • Inferentia and Trainium chips are the new concrete—providing a durable foundation of performance and efficiency.
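
To illustrate the reusable-pattern point, here is a minimal, hedged sketch of a SageMaker Pipeline whose input location is a parameter, so the same definition can serve many data sources. It assumes the SageMaker Python SDK; the container image, IAM role, bucket, and script name are placeholders.

```python
# Minimal sketch of a reusable SageMaker Pipeline (SageMaker Python SDK assumed).
# The image URI, role, bucket, and preprocess.py script are placeholders.
from sagemaker.processing import ProcessingInput, ScriptProcessor
from sagemaker.workflow.parameters import ParameterString
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import ProcessingStep

# Parameterized input: the same pipeline "arch" is reused by changing one value.
input_data = ParameterString(name="InputDataUri", default_value="s3://<your-bucket>/raw/")

processor = ScriptProcessor(
    image_uri="<your-processing-image>",   # placeholder container image
    command=["python3"],
    role="<your-sagemaker-role>",          # placeholder IAM role
    instance_type="ml.m5.xlarge",
    instance_count=1,
)

prepare_step = ProcessingStep(
    name="PrepareData",
    processor=processor,
    inputs=[ProcessingInput(source=input_data, destination="/opt/ml/processing/input")],
    code="preprocess.py",                  # placeholder preprocessing script
)

pipeline = Pipeline(name="reusable-genai-pipeline", parameters=[input_data], steps=[prepare_step])
# pipeline.upsert(role_arn="<your-sagemaker-role>")  # register or update the definition
# pipeline.start(parameters={"InputDataUri": "s3://<your-bucket>/another-source/"})
```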

Both Rome and AWS solved the same problem: how do you build something that scales reliably without reinventing from scratch every time?

Great design is timeless—whether in stone or in code.

From Arches to Algorithms: Foundations Across Time

When we think of Roman architecture, what comes to mind? The Colosseum, aqueducts, and basilicas—structures that stood the test of time. The Romans weren’t just building for beauty. They engineered for symmetry, durability, and public utility. Their aqueducts carried water across miles with remarkable precision, and their basilicas and forums became centers of civic life and governance.

Now, fast forward nearly 2,000 years. Today’s architects of generative AI face a very different medium—code and cloud instead of stone and marble—but the design questions aren’t so different.

In the world of AWS generative AI, the foundations are about scalability and modularity. Instead of concrete and arches, we build with services like:

  • Amazon SageMaker for streamlined training and deployment. Bringing together widely adopted AWS machine learning (ML) and analytics capabilities, the next generation of Amazon SageMaker delivers an integrated experience for analytics and AI with unified access to all your data. Teams can collaborate and build faster from a unified studio, using familiar AWS tools for model development in SageMaker AI (including HyperPod, JumpStart, and MLOps), generative AI, data processing, and SQL analytics, accelerated by Amazon Q Developer, AWS’s generative AI assistant for software development. All of this data is accessible whether it is stored in data lakes, data warehouses, or third-party or federated data sources, with governance built in to meet enterprise security needs.

  • Amazon Bedrock for direct access to generative AI models via APIs. Amazon Bedrock is a comprehensive, secure, and flexible service for building generative AI applications and agents. It connects you to leading foundation models (FMs), services to deploy and operate agents, and tools for fine-tuning, safeguarding, and optimizing models, along with knowledge bases that connect applications to your latest data, so you have everything you need to move quickly from experimentation to real-world deployment.

  • AWS Inferentia chips to deliver cost-efficient performance at scale. Designed by AWS, Inferentia chips provide high performance at the lowest cost in Amazon EC2 for deep learning (DL) and generative AI inference applications. A deployment sketch follows this list.
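
As a hedged sketch of how these pieces fit together, the snippet below deploys a SageMaker JumpStart foundation model to an Inferentia2-backed endpoint. It assumes the SageMaker Python SDK, a configured execution role, and a model that supports the chosen instance type; the model ID and role are placeholders.

```python
# Minimal sketch: deploy a JumpStart foundation model onto an AWS Inferentia2 instance.
# The model ID and role are placeholders; not every model supports every instance type.
from sagemaker.jumpstart.model import JumpStartModel

model = JumpStartModel(
    model_id="<jumpstart-model-id>",       # placeholder JumpStart model ID
    role="<your-sagemaker-role>",          # placeholder execution role
)

predictor = model.deploy(
    instance_type="ml.inf2.xlarge",        # Inferentia2 instance for cost-efficient inference
    initial_instance_count=1,
)

# Payload shape varies by model; text-generation models typically accept an "inputs" field.
print(predictor.predict({"inputs": "Summarize how Roman aqueducts carried water across valleys."}))

# predictor.delete_endpoint()              # clean up when finished
```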

Just as Roman engineers thought about structures that would last for centuries, AWS engineers design digital systems that can scale globally, adapt instantly, and endure change.

The underlying truth is timeless: whether in stone or in cloud, strong foundations determine what endures. Rome’s enduring arches echo in today’s scalable pipelines. Both ask the same question: what can we build today that will still matter tomorrow?

Weekly Roundup Of Tech News – 5/9/2021

  1. Software Development: US Supreme Court Rules on Key Software Development Practice
    • What: The Supreme Court ruled in favor of Google in the case filed by Oracle, finding Google’s use of the Java SE-compatible programming interfaces for Android development to be “fair use.”
    • How: The Court reasoned that Google copied only about four-tenths of one percent of the Java code, and that incorporating it into an entirely different product was a transformative enough use of the code at issue to qualify as fair use.
    • Why it matters: Developers can continue using open-source APIs with the understanding that implementations matter more than definitions. That makes this a big win for the development community.
  2. Artificial Intelligence: USPS turns to AI to boost Package Processing
    • What: USPS handled roughly 129 billion pieces of mail and 7.3 billion packages last year, and tracking these items has been difficult. A federal data scientist proposed deploying edge AI servers at postal processing centers to gain and share more data points.
    • How: NVIDIA, working with USPS, created the Edge Computing Infrastructure Program (ECIP), a distributed edge AI system now running at USPS locations on the NVIDIA EGX platform.
    • Why it matters: Tracking down an item used to take 8 to 10 people several days; with ECIP it now takes one or two hours. This enables USPS to locate any item in transit and better manage deliveries.
  3. Cryptocurrency:  Digital Dollar Project to launch currency pilots
    • What: The U.S. nonprofit Digital Dollar Project said it will launch five pilot programs over the next 12 months to test use cases for a U.S. central bank digital currency (CBDC).
    • How: The private-sector pilots are funded by Accenture Plc and involve financial firms, retailers, and NGOs, generating data to help U.S. policymakers develop a digital dollar through the central bank.
    • Why it matters: The resulting data can pave the way for a U.S. CBDC that asserts its place as a digital currency and helps drive broader adoption of cryptocurrencies by the mainstream population.

Google Cloud Professional Architect Certification Notes

For the past several months, I had been enrolled in Google Cloud courses on Coursera, aiming to complete my certification by the end of the third quarter of 2019. I am happy to announce that I took the exam and completed the certification on 9/13. Prior to that, I spent many hours on video content and Google Cloud documentation. It is amazing how thoughtfully Google has built its infrastructure services to address every aspect of application development and hardware allocation, despite being a late entrant to cloud offerings. In summary, my journey was:

  1. Enrolled in Google Kickstart programs offered in collaboration with Coursera
  2. Completed challenge quests in Qwiklabs
  3. Took five practice exams in Udemy (hat tip to Nizam Guntakal for recommending it)
  4. Reviewed notes from fellow architect

Overall enjoyed the journey!

 See my certification. 

Google Cloud Platform: Reference architecture for Data Warehouse

It has been a great journey to learn and understand Google Cloud Platform, also called GCP. Among the top cloud providers, Google seems to have nailed cloud technology very well. As I explored the services, I got some proofs of concept working and defined some reference architectures. One of those reference architectures is for a data warehouse. Our use case is creating an analytics dashboard and reporting platform for internal and external users. The solution requires three standard serverless services from Google Cloud Platform:

  1. Cloud Dataflow – a fully managed Google service for streaming, batch processing, and enriching data ingested into various Google storage options
  2. BigQuery – a serverless, highly scalable cloud data warehouse with a built-in in-memory BI Engine and machine learning capabilities (a query sketch follows this list)
  3. Data Studio – a serverless, highly scalable BI engine with a flexible suite of data analytics tools
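
As a small, hedged illustration of how a dashboard or report might pull from the warehouse, here is a sketch using the BigQuery Python client. It assumes the google-cloud-bigquery package; the project, dataset, and table names are placeholders.

```python
# Minimal sketch: query the warehouse layer with the BigQuery Python client.
# Project, dataset, and table names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="<your-project>")

query = """
    SELECT source_system, COUNT(*) AS record_count
    FROM `<your-project>.warehouse.fact_events`
    GROUP BY source_system
    ORDER BY record_count DESC
"""

for row in client.query(query).result():
    print(f"{row.source_system}: {row.record_count}")
```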

In this case, let us assume there are four sources: Sources 1, 2, and 3 reside within the US, and Source 4 resides outside the US, requiring some data separation. Cloud Dataflow, powered by Apache Beam, can be used to stream or batch-ingest the data from the sources. We can develop a pipeline for one source and reuse it for the others; Dataflow pipelines can be written in Java or Python.

Once we have the data, the industry practice is to maintain a data lake in BigQuery that stores the raw data for in-depth analytics or machine learning. From there, dimensional modeling may or may not be required, depending on the nature of the end output. For clarity, BigQuery is shown as both the data lake and the data warehouse, but the structures may reside as one.

Both Google Data Studio, now called Google Cloud BI Solution, and Tableau are visualization solutions; either could be extended to support the goals of the organization. This provides a high-level overview of a data warehousing reference architecture on Google Cloud Platform.
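
To illustrate the "develop one pipeline, reuse it for other sources" idea, below is a minimal, hedged Apache Beam sketch that reads a CSV source and appends raw records to a BigQuery data lake table. It assumes the apache-beam[gcp] package; the project, bucket, table, and schema are placeholders.

```python
# Minimal sketch of a reusable Dataflow (Apache Beam) ingestion pipeline.
# Project, bucket, table, and schema below are placeholders.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def run(source_path, table_spec):
    """One parameterized pipeline definition, reusable for each source."""
    options = PipelineOptions(
        runner="DataflowRunner",                 # or "DirectRunner" for local testing
        project="<your-project>",
        region="us-central1",
        temp_location="gs://<your-bucket>/tmp",
    )
    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadSource" >> beam.io.ReadFromText(source_path, skip_header_lines=1)
            | "ParseCsv" >> beam.Map(lambda line: dict(zip(["id", "value"], line.split(","))))
            | "WriteRawToBigQuery" >> beam.io.WriteToBigQuery(
                table_spec,                       # e.g. "<your-project>:datalake.source1_raw"
                schema="id:STRING,value:STRING",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
            )
        )

# Reuse the same definition for each source:
# run("gs://<your-bucket>/source1/*.csv", "<your-project>:datalake.source1_raw")
# run("gs://<your-bucket>/source4/*.csv", "<your-project>:datalake_eu.source4_raw")  # separated data
```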