Product Manager’s Questions
1. How do you approach database schema design when requirements are likely to evolve?
Great Response: "I start with identifying core entities and their relationships based on current requirements and create a normalized schema. However, I also plan for flexibility by considering which fields might need extension in the future. I prefer using migrations for schema changes and maintain backward compatibility during transitions. For highly volatile areas, I might use a more flexible approach like using JSON columns for specific attributes that could evolve rapidly, while still keeping the core schema strongly typed. I also document my schema design decisions and assumptions to make future changes easier for the team."
Mediocre Response: "I create tables based on the requirements and normalize them to avoid redundancy. If requirements change, I create new tables or add columns as needed. I use an ORM to make changes easier and run migrations when deploying changes. I try to follow best practices like naming conventions and indexing important fields."
Poor Response: "I usually design the tables to fit exactly what we need right now to deliver quickly, and we can always change it later when requirements change. If we need flexibility, I prefer using NoSQL databases because you don't have to worry about schema. For relational databases, I often use generic column names like 'attribute1', 'attribute2' so we can repurpose them later."
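The hybrid tactic in the strong answer (typed core columns plus a JSON column for volatile attributes) can be sketched with Python's built-in sqlite3; the table and field names are illustrative:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE products (
        id          INTEGER PRIMARY KEY,
        name        TEXT NOT NULL,             -- stable, strongly typed core fields
        price_cents INTEGER NOT NULL,
        attributes  TEXT NOT NULL DEFAULT '{}' -- volatile attributes stored as JSON
    )
""")

# New attributes can appear here without a schema migration.
conn.execute(
    "INSERT INTO products (name, price_cents, attributes) VALUES (?, ?, ?)",
    ("desk lamp", 2999, json.dumps({"color": "black", "wattage": 9})),
)

row = conn.execute("SELECT name, attributes FROM products").fetchone()
attrs = json.loads(row[1])
```

The core schema stays queryable and constrained, while the JSON column absorbs churn in the fast-moving attributes.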
2. How do you handle API versioning and backward compatibility?
Great Response: "I prefer explicit versioning in the URL path (like /v1/resource) for its clarity to API consumers. When introducing breaking changes, I maintain both versions for a deprecation period, clearly communicate timelines to stakeholders, and provide migration documentation. For minor changes, I ensure backward compatibility through careful implementation, like adding optional parameters rather than required ones. I also implement comprehensive integration tests that verify both new and old client behaviors continue to work. Additionally, I track API usage analytics to understand when it's safe to retire older versions."
Mediocre Response: "I use version numbers in the API URL and increment them when making breaking changes. I try to keep the old version working for a while so clients have time to update. I document the changes between versions and sometimes use feature flags to control when new functionality is available. I test that both versions work correctly before deploying."
Poor Response: "I try to avoid versioning altogether by not making breaking changes. When I have to change something, I just update the API and inform the frontend team so they can adapt their code. If we need to version, I add a version parameter that the client can send, and the backend handles different behaviors based on that parameter. This way we only maintain one codebase."
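Path-based versioning with both versions live during a deprecation window can be sketched like this; the routing table and handlers are illustrative, not any specific framework's API:

```python
# Both handler versions stay registered until analytics show v1 traffic has dropped.

def get_user_v1(user_id: int) -> dict:
    # v1 returned a single "name" field.
    return {"id": user_id, "name": "Ada Lovelace"}

def get_user_v2(user_id: int) -> dict:
    # v2 split the name: a breaking change, hence the new version.
    return {"id": user_id, "first_name": "Ada", "last_name": "Lovelace"}

ROUTES = {
    ("GET", "/v1/users"): get_user_v1,  # deprecated, retired once usage drops
    ("GET", "/v2/users"): get_user_v2,
}

def dispatch(method: str, path: str, user_id: int) -> dict:
    return ROUTES[(method, path)](user_id)

old = dispatch("GET", "/v1/users", 7)
new = dispatch("GET", "/v2/users", 7)
```

Old clients keep working unchanged while new clients adopt /v2 on their own schedule.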
3. What strategies do you use to ensure your code remains maintainable as the application grows?
Great Response: "I focus on clear separation of concerns with well-defined interfaces between components. I follow SOLID principles and implement domain-driven design patterns when appropriate. For larger projects, I organize code into modules or microservices based on business domains rather than technical layers. I regularly refactor to eliminate tech debt, maintain comprehensive tests for confidence in changing code, and document architectural decisions. I also prioritize knowledge sharing through code reviews and documentation to ensure the entire team understands design patterns used in the codebase."
Mediocre Response: "I try to follow good coding practices like using descriptive names, keeping functions small, and adding comments to complex logic. I create unit tests for important functionality and use design patterns I'm familiar with. I also try to organize code into logical folders and namespaces and follow the team's conventions for consistency."
Poor Response: "I focus on getting features done quickly, then clean up the code when we have time. I document complex parts so others can understand them. I rely on frameworks to provide structure rather than spending time on architecture. When something becomes too complex, I usually suggest refactoring it entirely or creating a new service rather than trying to maintain difficult code."
4. How do you handle performance optimization for database queries?
Great Response: "I follow a data-driven approach, starting with identifying actual bottlenecks through profiling and monitoring in production environments. I analyze query execution plans to understand how the database processes queries and focus on optimizations with the highest impact. Techniques I employ include adding appropriate indexes, optimizing join conditions, using query caching where appropriate, and pagination for large result sets. For complex reports or analytics, I might implement materialized views or pre-aggregation tables that update on a schedule. I also consider database-specific optimizations and always verify improvements with measurable benchmarks."
Mediocre Response: "I check for slow queries in the logs and add indexes to columns that appear in WHERE clauses. I try to avoid using SELECT * and only retrieve the columns I need. For complicated queries, I use the database's query analyzer to see where it's spending time. I also implement caching for frequently accessed data and break down complex queries into simpler ones when possible."
Poor Response: "When queries are running slow, I first try to add indexes to speed them up. If that doesn't work, I usually implement caching at the application level so we don't hit the database as often. For really problematic queries, I might suggest increasing the server resources or optimizing on the frontend by retrieving less data or implementing pagination."
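Reading the execution plan before and after adding an index, as the strong answer suggests, can be demonstrated with sqlite3 (exact plan wording varies by SQLite version; table names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total INTEGER)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i) for i in range(1000)],
)

def plan(sql: str) -> str:
    # The plan detail text is the fourth column of EXPLAIN QUERY PLAN rows.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT total FROM orders WHERE customer_id = 42"
before = plan(query)  # full table scan
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan(query)   # index lookup
```

Verifying the plan change, rather than assuming the index helped, is the data-driven step the strong answer emphasizes.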
5. How do you approach error handling and logging in a backend system?
Great Response: "I implement a multi-layered approach to error handling. At the infrastructure level, I use global exception handlers to ensure a consistent response format while preserving context. For business logic, I use domain-specific exceptions that map to appropriate HTTP status codes. For logging, I implement structured logging with correlation IDs to trace requests across services, and use different severity levels appropriately. Critical errors trigger alerts, and every log entry carries enough context for debugging without exposing sensitive data. I also periodically analyze error patterns to identify recurring issues and improve system resilience. In distributed systems, I ensure errors propagate correctly between services while maintaining useful context."
Mediocre Response: "I wrap important operations in try-catch blocks and return appropriate error codes to the client. I log exceptions with their stack traces and include context information like user IDs or request parameters. For critical errors, I set up notifications to alert the team. I try to standardize error responses across the API to make it easier for frontend developers to handle them consistently."
Poor Response: "I make sure to catch all exceptions so the application doesn't crash and return error messages to the user. I log errors to a file or logging service so we can look them up later if needed. For specific errors I know might happen, like validation issues, I create custom error messages. Most of our error handling happens at the controller level where we catch anything that might have gone wrong."
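The domain-exception and structured-logging ideas from the strong answer can be sketched together; the exception hierarchy and field names are illustrative:

```python
import json
import logging
import uuid

class DomainError(Exception):
    # Domain exceptions map to HTTP status codes at the boundary.
    status_code = 500

class OrderNotFound(DomainError):
    status_code = 404

def handle_request(correlation_id: str, order_id: int) -> dict:
    try:
        # Stand-in for the real lookup, which fails here for demonstration.
        raise OrderNotFound(f"order {order_id} not found")
    except DomainError as exc:
        # Structured log line: machine-parseable, carries the correlation id.
        logging.error(json.dumps({
            "level": "error",
            "correlation_id": correlation_id,
            "error": type(exc).__name__,
            "message": str(exc),
        }))
        # Consistent response envelope; no stack trace leaks to the client.
        return {"status": exc.status_code, "error": type(exc).__name__}

response = handle_request(str(uuid.uuid4()), 42)
```

The correlation id in every log line is what lets a single request be traced across services later.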
6. How do you balance technical debt against delivery timelines?
Great Response: "I view technical debt management as an ongoing investment decision rather than an all-or-nothing approach. I categorize technical debt by impact and remediation cost, then address high-impact, low-effort items continuously as part of regular development. For larger issues, I document them with clear business impacts and propose specific remediation plans to stakeholders. When taking on new debt, I make it explicit and time-boxed with a defined payback strategy. I also allocate a percentage of each sprint (around 20%) for maintenance and refactoring to prevent accumulation. Most importantly, I communicate the long-term costs of technical debt in business terms to ensure alignment on priorities."
Mediocre Response: "I try to identify areas where we can make quick improvements without disrupting deadlines. For urgent features, I sometimes take shortcuts but document them as tech debt items in our backlog. I advocate for dedicated time in our sprints to address technical debt before it becomes a bigger problem. When new features are requested, I communicate to the product team if we need additional time to implement them properly."
Poor Response: "Meeting deadlines is usually the top priority, so I focus on delivering working features first. I keep a list of things that need improvement that we can address when we have more time. If technical issues start causing bugs or slowing down development significantly, I bring them up to prioritize fixing them. I rely on our QA and testing processes to catch any issues from rushing implementation."
7. How do you approach securing sensitive data in a backend system?
Great Response: "I implement security in multiple layers. First, I classify data sensitivity levels and design appropriate controls for each. For data at rest, I use field-level encryption for sensitive information and ensure database backups are encrypted. For data in transit, I enforce TLS and verify certificate validity. I implement proper authentication using industry standards like OAuth 2.0/OIDC and role-based access control with the principle of least privilege. For passwords specifically, I use adaptive hashing algorithms like bcrypt with appropriate work factors. I also implement audit logging for sensitive operations, regular security scanning, and ensure secrets management using dedicated tools rather than configuration files. Finally, I stay updated on security best practices specific to our tech stack."
Mediocre Response: "I ensure we use HTTPS for all API endpoints and implement authentication for protected resources. Sensitive data like passwords are hashed using secure algorithms, and we validate all user inputs to prevent injection attacks. I use environment variables for storing secrets rather than hardcoding them. For particularly sensitive data like credit card information, I would recommend using a third-party service that specializes in handling such data securely."
Poor Response: "I rely on our authentication system to keep unauthorized users out and make sure to validate inputs to prevent security issues. For sensitive data, I encrypt it before storing in the database. We have a security team that runs regular scans, so they usually catch any issues we might miss. I follow whatever security requirements are in our specifications and use the security features provided by our framework."
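The adaptive-hashing point can be sketched with the standard library's scrypt; a production system would more likely use a maintained bcrypt or Argon2 library with tuned work factors:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique salt per password
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
```

Note the two details that distinguish this from naive hashing: a random per-password salt and a constant-time comparison on verification.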
8. How do you approach designing a system that needs to scale to handle unpredictable traffic spikes?
Great Response: "I design for elasticity from the ground up with a combination of horizontal scaling and asynchronous processing. I separate stateful and stateless components, with stateless services deployed behind load balancers on auto-scaling infrastructure. For database scaling, I implement read replicas, connection pooling, and consider sharding strategies for write-heavy workloads. I use message queues to handle traffic spikes by buffering requests during peak times. For caching, I implement a multi-level strategy with local caches for frequent data and distributed caches for shared state. I also design circuit breakers and rate limiters to protect critical services during extreme load. Most importantly, I implement comprehensive monitoring with automated scaling triggers based on real-time metrics and validate the design with load testing simulating various traffic patterns."
Mediocre Response: "I design the application to be stateless so we can run multiple instances behind a load balancer. I implement caching for frequently accessed data to reduce database load. For processing that doesn't need to happen in real-time, I use message queues to handle tasks asynchronously. I make sure the database is properly indexed and consider read replicas for scaling read operations. I also set up monitoring so we can see when the system is under load and react accordingly."
Poor Response: "I would deploy the application on cloud infrastructure that can automatically scale up when traffic increases. I'd implement caching to reduce load on the database and optimize the most frequently used queries. If we start experiencing performance issues, we can upgrade to more powerful servers or add more instances. I'd also implement some sort of monitoring so we know when we're approaching capacity limits."
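One of the spike protections the strong answer names, a rate limiter, can be sketched as a token bucket; the caller supplies timestamps, which keeps the sketch deterministic and testable:

```python
class TokenBucket:
    def __init__(self, rate_per_sec: float, capacity: int) -> None:
        self.rate = rate_per_sec
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate_per_sec=10, capacity=5)
# A burst of 8 simultaneous requests against a capacity of 5:
results = [bucket.allow(now=0.0) for _ in range(8)]
```

The first five requests pass and the rest are shed; half a second later the bucket has refilled and traffic flows again.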
9. How do you handle cross-team dependencies when developing new backend features?
Great Response: "I start by mapping all dependencies early and establishing clear interfaces and contracts before implementation begins. I prefer using API specifications like OpenAPI to document and agree on interfaces, allowing teams to work independently against those contracts. For runtime dependencies, I implement stubbing and service virtualization to unblock development and testing. I schedule regular sync meetings with dependent teams and maintain a shared project board to track cross-team deliverables. When dependencies shift, I re-evaluate priorities and communicate impacts transparently. For longer-term solutions, I advocate for moving toward event-driven architectures and well-defined service boundaries to reduce tight coupling between teams."
Mediocre Response: "I identify which teams we need to coordinate with and set up meetings to discuss requirements and timelines. I try to document the APIs or integration points we need from other teams and share our own specifications so they know what to expect. I keep track of dependencies in our project management tool and follow up regularly to make sure they're on track. If there are delays, I communicate that to the product manager so we can adjust our timeline or scope."
Poor Response: "I reach out to the other teams when we need something from them and explain what we need. I try to work with whatever they can provide and adapt our implementation accordingly. If we get blocked waiting for another team, I switch to working on other tasks that don't have dependencies. When delays happen, I make sure to document that the delay was due to external dependencies so our team isn't held responsible."
10. How do you approach monitoring and alerting for backend services?
Great Response: "I implement a comprehensive observability strategy covering three key pillars: metrics, logs, and traces. For metrics, I track both technical indicators (CPU, memory, request rates) and business-relevant KPIs. I set up multi-level alerting with different severity thresholds—pages for critical customer-impacting issues and non-urgent notifications for warning signs. I follow the USE method (Utilization, Saturation, Errors) for resources and the RED method (Rate, Errors, Duration) for services. For distributed systems, I implement distributed tracing with correlation IDs to track requests across services. Most importantly, I focus on actionable alerts with clear remediation steps and continually refine alerting thresholds based on false positives/negatives to avoid alert fatigue."
Mediocre Response: "I set up monitoring for key metrics like CPU, memory usage, and response times. I implement health check endpoints that our monitoring system can ping regularly. For logging, I ensure we capture errors with enough context to debug issues. I set up alerts for when services go down or when error rates exceed normal thresholds. I also create dashboards that show the overall system health so we can spot trends before they become problems."
Poor Response: "I make sure all errors are logged with their stack traces so we can investigate when problems occur. I set up basic monitoring to alert us when the service goes down or becomes unresponsive. For performance issues, I rely on user reports and then check the logs to find what went wrong. Our operations team usually handles most of the monitoring setup based on what we tell them is important to track."
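The RED method mentioned in the strong answer (Rate, Errors, Duration per service) can be sketched as a minimal in-process recorder; real systems would export these to a metrics backend:

```python
from collections import defaultdict

class RedMetrics:
    def __init__(self) -> None:
        self.requests = defaultdict(int)
        self.errors = defaultdict(int)
        self.durations = defaultdict(list)

    def record(self, endpoint: str, duration_ms: float, error: bool) -> None:
        self.requests[endpoint] += 1
        if error:
            self.errors[endpoint] += 1
        self.durations[endpoint].append(duration_ms)

    def summary(self, endpoint: str) -> dict:
        count = self.requests[endpoint]
        return {
            "rate": count,  # requests observed in the current window
            "error_ratio": self.errors[endpoint] / count,
            "p50_ms": sorted(self.durations[endpoint])[count // 2],
        }

metrics = RedMetrics()
for duration, failed in [(12.0, False), (48.0, False), (950.0, True)]:
    metrics.record("GET /orders", duration, failed)
summary = metrics.summary("GET /orders")
```

An alert would fire on the error ratio or latency percentile, not on raw log volume, which is what keeps alerts actionable.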
11. How do you design APIs that are both flexible for future requirements yet intuitive for consumers?
Great Response: "I focus on designing resource-oriented APIs that model business domains rather than implementation details. I follow REST principles for most APIs, using standard HTTP methods and status codes consistently. For flexibility, I implement versioning strategies and use hypermedia links where appropriate to allow API evolution. I design endpoints to be coarse-grained enough to minimize round trips but fine-grained enough to avoid unnecessary data transfer. For extending resources, I use mechanisms like optional fields or expansion parameters rather than creating new endpoints. I also provide comprehensive documentation with examples, create client SDKs for complex APIs, and collect feedback from API consumers during development. For particularly complex domains, I consider GraphQL to give consumers more control over the data they retrieve."
Mediocre Response: "I design RESTful APIs following standard conventions for endpoints and HTTP methods. I make sure to use proper status codes and provide consistent error responses. I document all endpoints with their parameters and response formats. For flexibility, I include optional parameters that might be needed in the future and consider backwards compatibility when making changes. I get feedback from frontend developers or API consumers during the design phase to make sure it meets their needs."
Poor Response: "I create endpoints based on what the frontend needs right now, making sure they return all the data needed for each screen. I use descriptive names so it's clear what each endpoint does. If requirements change, we can add new endpoints or parameters as needed. I document the API so other teams know how to use it. I usually follow the patterns used in the rest of our application for consistency."
12. How do you approach testing a complex backend system?
Great Response: "I implement a comprehensive testing strategy across multiple levels. At the unit level, I focus on testing business logic in isolation with high coverage. For integration tests, I verify interactions between components, using test containers for dependencies like databases. For API testing, I implement contract tests to ensure interfaces remain consistent. For critical flows, I add end-to-end tests that verify entire business processes. I complement automated tests with exploratory testing for edge cases and implement chaos engineering principles for resilience testing in distributed systems. Throughout, I emphasize test maintainability by creating abstractions for common testing patterns and focusing on testing behavior rather than implementation. I also implement continuous integration with progressive test execution, running faster tests first to provide quick feedback."
Mediocre Response: "I write unit tests for individual components focusing on the core business logic. For API endpoints, I create integration tests that verify the correct responses for different inputs. I use mocks or stubs for external dependencies to keep tests isolated and fast. I make sure to test both happy paths and error conditions. For critical functionality, I add end-to-end tests that verify the entire flow. I run all tests in our CI pipeline to catch issues early."
Poor Response: "I focus on testing the main functionality with unit tests for critical business logic. For integration points, I verify that they work correctly in our development environment. I rely on our QA team to catch any edge cases or issues through their testing process. If we find bugs in production, I make sure to add tests for those specific scenarios to prevent regression. Manual testing is usually sufficient for smaller changes."
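The "test behavior, not implementation" point can be illustrated with a unit test that stubs an external dependency; the gateway and checkout names are illustrative:

```python
import unittest

class StubPaymentGateway:
    # Stub standing in for a real payment provider: declines large charges.
    def charge(self, amount_cents: int) -> bool:
        return amount_cents <= 10_000

def checkout(gateway, amount_cents: int) -> str:
    return "paid" if gateway.charge(amount_cents) else "declined"

class CheckoutTest(unittest.TestCase):
    # Assertions target the observable outcome, not how checkout works inside.
    def test_happy_path(self):
        self.assertEqual(checkout(StubPaymentGateway(), 500), "paid")

    def test_declined_charge(self):
        self.assertEqual(checkout(StubPaymentGateway(), 50_000), "declined")

suite = unittest.TestLoader().loadTestsFromTestCase(CheckoutTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because the tests only exercise checkout's public contract, the payment integration can be rewritten without touching them.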
13. How do you handle distributed transactions across multiple services or databases?
Great Response: "I avoid distributed transactions when possible by redesigning boundaries to keep related data together. When transactions across services are necessary, I implement the Saga pattern with choreography for simpler flows or orchestration for complex ones with clear compensating actions for rollbacks. For eventual consistency scenarios, I use event sourcing combined with the outbox pattern to ensure reliable event delivery without two-phase commits. I implement idempotent operations to handle retry scenarios safely and design for partial failures with clear reconciliation processes. For monitoring these flows, I implement distributed tracing to track transactions across services and have comprehensive observability to detect and resolve inconsistencies."
Mediocre Response: "I try to avoid distributed transactions when possible by designing service boundaries carefully. When they're necessary, I implement a two-phase commit protocol or use a saga pattern with compensating transactions to roll back changes if one step fails. I ensure operations are idempotent so they can be safely retried, and I implement logging at each step so we can troubleshoot issues. For simpler cases, I might use eventual consistency with background jobs that reconcile data."
Poor Response: "I try to keep related operations in the same service to avoid distributed transactions. When we need to update multiple systems, I implement API calls to each service and handle errors by retrying failed operations. If consistency is critical, I might suggest using a message queue to ensure all operations eventually complete. If something fails, we usually have monitoring in place to alert us so we can fix inconsistencies manually if needed."
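An orchestrated saga with compensating actions, as described in the strong answer, can be sketched in a few lines; the step names are illustrative:

```python
def run_saga(steps):
    """steps: list of (action, compensation) callables."""
    completed = []
    for action, compensation in steps:
        try:
            action()
            completed.append(compensation)
        except Exception:
            for undo in reversed(completed):  # compensate in reverse order
                undo()
            return "rolled_back"
    return "committed"

log = []
ok_steps = [
    (lambda: log.append("reserve_stock"), lambda: log.append("release_stock")),
    (lambda: log.append("charge_card"), lambda: log.append("refund_card")),
]

def fail():
    raise RuntimeError("shipping unavailable")

outcome_ok = run_saga(ok_steps)
log.clear()
# A failure on the third step triggers compensation of the first two.
outcome_fail = run_saga(ok_steps + [(fail, lambda: None)])
```

For this to be safe under retries, each action and compensation would also need to be idempotent, as the strong answer notes.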
14. How do you decide between different database technologies for a new project?
Great Response: "I evaluate database choices based on multiple dimensions of the specific use case. First, I analyze data access patterns—read/write ratios, query complexity, and transaction requirements—to determine if we need ACID compliance or if eventual consistency is acceptable. I consider scalability needs, both in terms of data volume and request throughput, and evaluate operational factors like backup strategies and team familiarity. For specific requirements like full-text search, geospatial queries, or time-series data, I consider specialized databases that excel in those areas. Often, I implement a polyglot persistence approach, using different database technologies for different components of the system based on their specific needs, with clear boundaries between them. I prototype critical data access patterns with realistic volumes to validate performance assumptions before committing to a technology."
Mediocre Response: "I consider the type of data we're storing and how it will be accessed. For structured data with clear relationships, I usually choose a relational database. For high-volume data with flexible schema requirements, I might use a NoSQL database like MongoDB. I also consider the scaling requirements—whether we need horizontal scaling or if vertical scaling would be sufficient. I take into account the team's familiarity with different technologies and the ecosystem of tools available for monitoring and management."
Poor Response: "I usually go with what the team is already familiar with to avoid a learning curve. Relational databases like PostgreSQL work for most use cases, so that's my default choice. If we have specific performance issues, I might look at NoSQL options. I also consider what our infrastructure team prefers to support since they'll be managing the database in production. New technologies might be interesting but introduce risk, so I prefer proven solutions."
15. How do you approach documentation for backend systems?
Great Response: "I treat documentation as a first-class citizen in the development process, not an afterthought. I implement a layered documentation approach: API specifications using standards like OpenAPI, which generate interactive documentation; architectural decision records (ADRs) that capture the context and reasoning behind significant design choices; operational runbooks for common procedures and troubleshooting; and high-level system architecture diagrams showing component relationships. I automate documentation where possible, generating API docs from code and keeping diagrams in version control with tools like C4 model or Mermaid. Most importantly, I integrate documentation maintenance into our regular workflow—updating docs is part of our definition of done for any feature, and we review documentation changes alongside code changes."
Mediocre Response: "I document API endpoints using tools like Swagger or OpenAPI to generate interactive documentation. I maintain a README file with setup instructions and key information about the project. For complex components, I add comments explaining the logic and any non-obvious decisions. I try to create diagrams for the overall architecture and important flows. I update documentation when making significant changes to ensure it stays current."
Poor Response: "I focus on writing clear code with good naming conventions so the code is self-documenting. I add comments for complex logic and maintain basic setup instructions in the README. For APIs, I usually create a simple document listing the endpoints and their parameters. If someone joins the team and needs help understanding the system, I'm happy to walk them through it personally."
16. How do you handle feature flags and progressive rollouts?
Great Response: "I implement feature flags as a core part of our deployment strategy, separating code deployment from feature activation. I categorize flags into different types—release flags for new features, experiment flags for A/B testing, and operational flags for system behavior—and treat each appropriately. For implementation, I use a dedicated feature flag service that provides real-time toggling without redeployments, fine-grained user targeting, and analytics on flag usage. I design flag evaluation to be performance-efficient, with local caching and minimal runtime impact. For progressive rollouts, I implement percentage-based deployments starting with internal users, then a small percentage of customers, monitoring key metrics at each stage. Most importantly, I enforce flag hygiene with explicit expiration dates for temporary flags and regular cleanup to prevent technical debt from abandoned flags."
Mediocre Response: "I use a feature flag system that allows us to enable or disable features without deploying new code. For important features, I implement percentage-based rollouts to test with a small group of users before expanding to everyone. I make sure our monitoring can track metrics specific to the new feature so we can detect any issues early. I keep track of which flags are active and try to remove them once a feature is fully deployed to avoid complexity in the codebase."
Poor Response: "I add configuration variables that can be toggled in our environment settings to turn features on or off. For rollouts, we usually deploy to our staging environment first, then to production but with the feature turned off, and then enable it when we're confident it's working correctly. I try to remember to remove old feature flags when they're no longer needed, but honestly, some of them stay in the code for a while."
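The percentage-based rollout from the strong answer can be sketched by hashing each user into a stable bucket; the flag name is illustrative:

```python
import hashlib

def is_enabled(flag: str, user_id: str, rollout_percent: int) -> bool:
    # Hashing flag + user id gives a stable 0-99 bucket, so a given user keeps
    # the same flag state and raising the percentage only ever adds users.
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).digest()
    bucket = int.from_bytes(digest[:2], "big") % 100
    return bucket < rollout_percent

flag = "new-checkout"
enabled_at_20 = [u for u in (f"user-{i}" for i in range(1000))
                 if is_enabled(flag, u, 20)]
```

Roughly a fifth of users land in the 20% cohort, and every one of them remains enabled when the rollout widens to 50%, which is what makes monitoring each stage meaningful.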
17. How do you handle incoming feature requests that might conflict with your technical roadmap?
Great Response: "I approach this as a collaborative prioritization exercise rather than a binary choice. First, I analyze the request to understand its business value and urgency, while assessing its technical impact on our current roadmap. I then prepare multiple implementation options with different trade-offs between scope, timeline, and technical debt, including potential phased approaches. When discussing with stakeholders, I focus on outcomes rather than implementations and clearly communicate downstream impacts of reprioritization. If we decide to incorporate the request, I negotiate adjustments to the overall roadmap rather than simply adding work. For recurring conflicts, I advocate for establishing a more structured intake process with clear prioritization criteria that balance business needs and technical health."
Mediocre Response: "I discuss the request with the product team to understand its importance and urgency. I evaluate how much effort it would take and how it might impact our existing roadmap. If it's a high priority for the business, I work with the product manager to reprioritize our backlog, possibly moving some lower-priority items out. I try to propose technical solutions that can accommodate the request without completely derailing our existing plans, like implementing a simpler version first."
Poor Response: "I explain to the product team how this would impact our current timeline and the technical work we've planned. If they insist it's important, I add it to our backlog and we'll get to it after completing our current priorities. For urgent requests, I might suggest we bring in additional resources or put some technical improvements on hold to deliver the feature, though this usually creates technical debt we'll need to address later."
18. How do you approach refactoring legacy code that lacks tests?
Great Response: "I take a strategic, incremental approach to refactoring legacy code. First, I analyze the system to identify natural seams and boundaries for testing. Before making any changes, I add characterization tests that document current behavior, even if it's suboptimal, to ensure refactoring doesn't introduce regressions. I use techniques like the strangler fig pattern to gradually replace components while maintaining system functionality. For particularly complex areas, I might implement logging and monitoring before refactoring to better understand runtime behavior. I focus on high-value, high-risk areas first, delivering business value alongside technical improvements where possible. Throughout the process, I maintain a 'boy scout rule' approach—leaving each area better than I found it—while being careful not to expand scope beyond manageable chunks."
Mediocre Response: "I start by adding tests around the areas I plan to refactor, focusing on the external behavior rather than implementation details. I refactor in small, incremental steps, running the tests after each change to ensure I haven't broken anything. I prioritize parts of the code that are most frequently changed or cause the most bugs. I document the improvements I make so the team understands the new structure. If the code is particularly complex, I might pair with someone who has more knowledge of that area."
Poor Response: "I identify the most problematic areas and rewrite them with better practices. Since adding tests to legacy code is often difficult, I focus on getting the new implementation right and then add tests afterward. I try to make improvements alongside feature work so we're not just doing refactoring with no visible benefit. If we encounter bugs during refactoring, we fix them as we go and make sure our new implementation works correctly."
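A characterization test, as used in the strong answer, pins down the legacy code's current observable behavior before any refactoring; the legacy function and its quirk are illustrative:

```python
def legacy_format_price(cents):
    # Quirk preserved on purpose: negative amounts lose their sign.
    dollars = abs(cents) // 100
    remainder = abs(cents) % 100
    return "$%d.%02d" % (dollars, remainder)

# Golden cases captured by running the current code, not by reading the spec.
CHARACTERIZATION_CASES = {
    0: "$0.00",
    199: "$1.99",
    -250: "$2.50",  # documents the sign-dropping quirk, wrong as it may be
}

def refactored_format_price(cents):
    return f"${abs(cents) // 100}.{abs(cents) % 100:02d}"

# Both implementations must reproduce the recorded behavior exactly.
for cents, expected in CHARACTERIZATION_CASES.items():
    assert legacy_format_price(cents) == expected
    assert refactored_format_price(cents) == expected
```

The point is that the tests document what the code does today, quirks included; deciding whether a quirk is actually a bug to fix is a separate, deliberate step.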
19. How do you ensure service reliability and handle incidents when they occur?
Great Response: "I build reliability in from the start with defensive design patterns like circuit breakers, bulkheads, and timeouts to prevent cascading failures. I implement comprehensive observability with correlated metrics, logs, and traces that enable quick diagnosis. For incident response, I follow a structured process: first stabilize the system using techniques like feature toggles or traffic shedding, then diagnose and fix the root cause. After each incident, I conduct blameless postmortems to identify systemic improvements, not just specific fixes. I also implement chaos engineering practices to proactively discover weaknesses before they affect users. Most importantly, I establish clear SLOs based on customer impact and use error budgets to balance feature development against reliability work."
Mediocre Response: "I implement monitoring and alerting for key metrics and error rates so we can detect issues quickly. I ensure we have logging that helps identify the source of problems. For critical paths, I implement circuit breakers and fallback mechanisms to prevent cascading failures. When incidents occur, I follow our incident response process: acknowledge the alert, investigate the issue, apply a fix or workaround, and then follow up with a more permanent solution if needed. Afterward, we conduct a postmortem to identify what went wrong and how to prevent similar issues in the future."
Poor Response: "I make sure we have monitoring in place to alert us when things go wrong. When an incident happens, I check the logs to find what's causing the issue and fix it as quickly as possible. I try to add error handling for known failure points and rely on our QA process to catch issues before they reach production. After resolving an incident, I document what happened so we can address similar issues faster in the future."
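The circuit breaker named in the strong answer can be sketched as follows; the caller supplies timestamps to keep the sketch deterministic, and the thresholds are illustrative:

```python
class CircuitBreaker:
    def __init__(self, failure_threshold: int, reset_after: float) -> None:
        self.failure_threshold = failure_threshold
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, now: float):
        if self.opened_at is not None:
            if now - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: let one trial call through
            self.failures = 0
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = now  # open the circuit
            raise
        self.failures = 0
        return result

breaker = CircuitBreaker(failure_threshold=2, reset_after=30.0)

def flaky():
    raise TimeoutError("upstream timeout")

states = []
for t in (0.0, 1.0, 2.0):
    try:
        breaker.call(flaky, now=t)
    except TimeoutError:
        states.append("timeout")
    except RuntimeError:
        states.append("fast_fail")
# states == ["timeout", "timeout", "fast_fail"]
recovered = breaker.call(lambda: "ok", now=40.0)  # past the reset window
```

Failing fast while the circuit is open is what stops a struggling dependency from dragging down every caller, the cascading failure the strong answer guards against.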
20. How do you evaluate and incorporate new technologies into your backend stack?
Great Response: "I follow a structured evaluation framework that balances innovation with risk management. First, I clearly define the problem we're trying to solve and whether existing technologies are insufficient. For promising new technologies, I conduct spike solutions to test them against specific use cases and assess factors beyond technical capabilities—community health, maintenance track record, security considerations, and operational complexity. I prefer introducing new technologies in non-critical paths first to gain experience before wider adoption. Throughout this process, I involve the team in evaluation and decision-making to ensure buy-in and knowledge sharing. Most importantly, I document the decision-making process and establish clear success criteria to evaluate whether the technology delivers the expected benefits once implemented."
Mediocre Response: "I research new technologies that might address challenges we're facing and evaluate them based on factors like performance, community support, and compatibility with our existing stack. I usually create a proof of concept to test how well it works for our specific needs. Before adopting anything new, I discuss it with the team to get their input and make sure everyone is comfortable learning it. I try to introduce new technologies incrementally rather than changing too many things at once."
Poor Response: "I keep up with industry trends and suggest technologies that could improve our stack or solve problems we're facing. I usually try out promising technologies in small projects first to see how they work. If something seems valuable, I advocate for incorporating it into our stack. I rely on documentation and tutorials to help the team learn new technologies. I focus on using established technologies with good support rather than bleeding-edge options."