<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>OteMeta</title>
  
  <subtitle>Fierce as a tiger in the heart, delicately smelling the roses.</subtitle>
  <link href="https://www.nablepart.com/atom.xml" rel="self"/>
  
  <link href="https://www.nablepart.com/"/>
  <updated>2025-08-25T09:00:39.790Z</updated>
  <id>https://www.nablepart.com/</id>
  
  <author>
    <name>John Doe</name>
    
  </author>
  
  <generator uri="https://hexo.io/">Hexo</generator>
  
  <entry>
    <title>An in-depth look at distributed CAP and BASE theory</title>
    <link href="https://www.nablepart.com/a0866d0cfc29/"/>
    <id>https://www.nablepart.com/a0866d0cfc29/</id>
    <published>2025-08-25T09:00:39.790Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
    <content type="html"><![CDATA[<h2 id="1-Introduction"><a href="#1-Introduction" class="headerlink" title="1. Introduction"></a>1. Introduction</h2><p>In the field of computer science, distributed systems are a challenging research area and an essential optimization practice for Internet applications; <strong>CAP theory and BASE theory are two key concepts</strong> in distributed systems.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/eaed3a9677094bf6a44ee1762a826245%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><h2 id="2-What-is-a-distributed-system"><a href="#2-What-is-a-distributed-system" class="headerlink" title="2. What is a distributed system"></a>2. What is a distributed system</h2><p>First, let’s talk about distributed systems. You can think of a distributed system as a large network of computers, made up of multiple computer or server nodes that may be located in different geographical locations.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/4571e89f825647ef92e3e71e6086beaf%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>As shown in the figure, the three nodes of the application layer are deployed in different cities. <strong>These nodes can communicate and collaborate with each other to accomplish complex tasks</strong>.</p><p>Imagine you are a team leader with a task to accomplish. If you complete it alone, it may take a long time.</p><p>But if you break the task down into several subtasks and assign them to your team members, they can <strong>work in parallel</strong> and complete the task faster. 
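</p><p>The team-leader analogy can be sketched in a few lines of code. Here a local thread pool stands in for the worker nodes; this is purely illustrative, not a real distributed scheduler:</p>

```python
# Split one big task into subtasks and let "workers" handle them in parallel.
# A local thread pool stands in for the nodes of a distributed system.
from concurrent.futures import ThreadPoolExecutor

def process_subtask(chunk):
    # Each worker node handles its own slice of the work.
    return sum(chunk)

data = list(range(100))                                     # the whole task
chunks = [data[i:i + 25] for i in range(0, len(data), 25)]  # the subtasks

with ThreadPoolExecutor(max_workers=4) as pool:
    partial_results = list(pool.map(process_subtask, chunks))

total = sum(partial_results)  # combine partial results into the final answer
```

<p>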
This is the core idea of distributed systems.</p><h2 id="3-CAP-Theory"><a href="#3-CAP-Theory" class="headerlink" title="3 CAP Theory"></a>3 CAP Theory</h2><p>Next, let’s talk about CAP theory, which is a very important principle in the design of distributed systems.</p><p>CAP refers to the three basic properties of <strong>Consistency, Availability, and Partition tolerance</strong> in distributed systems.</p><h3 id="C-Consistency"><a href="#C-Consistency" class="headerlink" title="C - Consistency"></a>C - Consistency</h3><p>Consistency means that no matter which node of the distributed system you read from, you get the same copy of the data, which ensures the accuracy of the data.</p><p>In distributed systems, there are three broad types of consistency: Strong Consistency, Weak Consistency, and Final Consistency.</p><h4 id="Strong-Consistency"><a href="#Strong-Consistency" class="headerlink" title="Strong Consistency"></a>Strong Consistency</h4><p>Strong consistency requires that when a user accesses data in a distributed system, the data returned should be exactly the same no matter which node responds.</p><p>For example, suppose the order system shows 10 pairs of sneakers left in inventory and Zhang San buys a pair. Once the data update completes, Li Si must see only 9 pairs; otherwise the system may oversell.</p><p>However, this coordination takes extra time and effort: Li Si must queue up and wait for Zhang San’s purchase to finish before he can continue, which is less efficient.</p><h4 id="Weak-Consistency"><a href="#Weak-Consistency" class="headerlink" title="Weak Consistency"></a>Weak Consistency</h4><p>Weak consistency means that after the data in a distributed system has been updated, subsequent accesses are allowed to return the old, pre-update data.</p><p>It is like going to a party where everyone has their own clock. 
The time of each clock may be a little different, but that doesn’t stop everyone from getting together and having fun.</p><p>Weak consistency improves business efficiency, but it can sometimes lead to confusion; imagine the long wait if the party-goers’ clocks are too far off.</p><h4 id="Final-Consistency"><a href="#Final-Consistency" class="headerlink" title="Final Consistency"></a>Final Consistency</h4><p>Final consistency is a special form of weak consistency: it requires that once a data update completes, all accesses after a certain period of time return the latest data.</p><p>It’s like message propagation in the circle of friends. When you send a message, it is not immediately seen by all your friends, but eventually everyone will see the same message.</p><p>Based on cost-effectiveness considerations, the vast majority of business systems adopt <strong>final consistency</strong> as the design philosophy for their distributed systems.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/7ab2262cd7d64ca6bb42c69b49308ab2%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>The consistency in <strong>CAP theory, however, means strong consistency</strong>. 
As described in the official documentation: <code>All nodes see the same data at the same time</code>.</p><h3 id="A-Availability"><a href="#A-Availability" class="headerlink" title="A - Availability"></a>A - Availability</h3><p>Availability means that every request to a distributed system should receive a response, and should be completed within a bounded time.</p><p>Availability ensures the stability and reliability of the system: it describes the system’s ability to serve users well, without user actions failing or accesses timing out in ways that hurt the user experience.</p><p>In official terms, <code>Reads and writes always succeed</code>: the service is always available within a normal response time.</p><h3 id="P-Partition-Tolerance"><a href="#P-Partition-Tolerance" class="headerlink" title="P - Partition Tolerance"></a>P - Partition Tolerance</h3><p>Partition tolerance means that <strong>the system can continue to run in the event of a network partition or communication failure</strong>; that is, if network communication between nodes fails, or one of the nodes in the system has a problem, we still need to ensure that the business system stays available.</p><p>In official terms, <code>The system continues to operate despite arbitrary message loss or failure of part of the system</code>: even when a node fails or the network partitions, the distributed system can still provide services that satisfy consistency or availability.</p><h2 id="4-Characteristics-of-CAP"><a href="#4-Characteristics-of-CAP" class="headerlink" title="4. Characteristics of CAP"></a>4. 
Characteristics of CAP</h2><h2 id="4-1-Importance-of-partition-fault-tolerance"><a href="#4-1-Importance-of-partition-fault-tolerance" class="headerlink" title="4.1 Importance of partition fault tolerance"></a>4.1 Importance of partition fault tolerance</h2><p>At this point, students with a basic knowledge of distributed computing may ask: “CAP theory is indeed very important, but it seems that these three properties cannot all be satisfied at the same time, right?”</p><p>Yes, this is the core idea of CAP theory.</p><p>CAP theory tells us that in a distributed system we can satisfy at most two of the three properties at the same time, never all three.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/6351fccdc0dd4faeaa5c66bd6a9a1a73%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>Why can’t C, A, and P all hold at once? First, we have to recognize that because the network in a distributed system is unreliable, <strong>partition tolerance must be guaranteed</strong> if the service is to remain available to the outside world.</p><p>Imagine a system with only one partition: talking about distribution would be meaningless. 
With more than one partition, partition failures are bound to occur, so guaranteeing partition tolerance becomes the most basic requirement of a distributed system.</p><p>So now we only need to <strong>consider whether we can satisfy both consistency and availability</strong> on the basis of partition tolerance, which we can examine by proof by contradiction.</p><h3 id="4-2-AP-Or-CP"><a href="#4-2-AP-Or-CP" class="headerlink" title="4.2 AP Or CP"></a>4.2 AP Or CP</h3><p>Suppose we have two partitions, P1 and P2, holding copies of the same data, D1 and D2, which start out identical.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/2d2744cf1d7144e8980b1951d1cf235e%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>Next, request 1 accesses P1 and modifies the data in D1. Then request 2 accesses P2 to read the same data from D2.</p><p>At this point, we need to make a tradeoff.</p><h4 id="Ensure-consistency-first"><a href="#Ensure-consistency-first" class="headerlink" title="Ensure consistency first"></a>Ensure consistency first</h4><p>Suppose we first choose to satisfy consistency and partition tolerance, i.e. 
CP.</p><p>In this scenario, the following can easily happen: D1 has been updated, but a query against D2 still returns the old data.</p><p>To keep D2 and D1 completely consistent, you must lock D2 on P2 while D1 is being updated, and synchronize the update to D2 only after the D1 update completes.</p><p>During this process, the locked D2 cannot respond to request 2 in real time, which violates the availability of P2.</p><p>Therefore, once consistency is guaranteed, C, A, and P cannot all be satisfied at the same time.</p><h4 id="Guarantee-availability-first"><a href="#Guarantee-availability-first" class="headerlink" title="Guarantee availability first"></a>Guarantee availability first</h4><p>Now suppose availability and partition tolerance are guaranteed first, that is, AP.</p><p>Availability requires that both P1 and P2 respond in real time, so when D1 has just been updated and the change has not yet been synchronized to D2, the data in the two copies is inconsistent, which violates the data consistency across P1 and P2.</p><p>Therefore, once availability is guaranteed, C, A, and P cannot all be satisfied at the same time.</p><h3 id="4-3-CAP-tradeoffs"><a href="#4-3-CAP-tradeoffs" class="headerlink" title="4.3 CAP tradeoffs"></a>4.3 CAP tradeoffs</h3><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/bd611a4f37ed455aaedf48c4fbf064b9%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>Since you can’t have all three of CAP, what should you choose? 
Generally, we can choose according to our business as follows.</p><h4 id="Satisfy-consistency-and-partition-fault-tolerance-CP"><a href="#Satisfy-consistency-and-partition-fault-tolerance-CP" class="headerlink" title="Satisfy consistency and partition fault tolerance CP"></a>Satisfy consistency and partition fault tolerance CP</h4><p>Guarantees strong consistency across partitions (C) and gives up availability (A).</p><p>Equivalently, a request must wait for the data to be fully synchronized before the system returns a response. This model is generally used in financial systems, where the data must be strictly consistent.</p><h4 id="Meet-Availability-and-Partition-Fault-Tolerance-AP"><a href="#Meet-Availability-and-Partition-Fault-Tolerance-AP" class="headerlink" title="Meet Availability and Partition Fault Tolerance AP"></a>Meet Availability and Partition Fault Tolerance AP</h4><p>Guarantees the availability of partitions (A) and gives up strong consistency (C).</p><p>When requesting access to data in a partition, you may get old data that is not synchronized. 
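</p><p>The AP behaviour described here can be sketched in code. The <code>Replica</code> class and its methods below are hypothetical stand-ins for a real replicated store, not an actual database API:</p>

```python
# An illustrative sketch of AP-style behaviour: a replica always answers
# reads from its local copy, even if asynchronous replication has not
# caught up yet. This is a toy model, not a real datastore.
class Replica:
    def __init__(self):
        self.data = {}

    def read(self, key):
        # Availability: respond immediately, possibly with stale data.
        return self.data.get(key)

    def apply(self, key, value):
        # Invoked by an asynchronous replication process, some time later.
        self.data[key] = value

primary, follower = Replica(), Replica()
primary.apply("stock", 9)        # the write lands on the primary first
stale = follower.read("stock")   # the follower answers, but returns None (stale)
follower.apply("stock", 9)       # replication eventually catches up
fresh = follower.read("stock")   # now the follower returns the latest value, 9
```

<p>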
This model generally requires only final consistency of the data, which in turn preserves response speed and high availability.</p><p>AP is widely used in industry; the famous BASE theory (discussed in detail below) is one example.</p><h4 id="Satisfying-Availability-and-Consistency-AC"><a href="#Satisfying-Availability-and-Consistency-AC" class="headerlink" title="Satisfying Availability and Consistency AC"></a>Satisfying Availability and Consistency AC</h4><p>As shown above, it is not possible to guarantee both strong consistency (C) and availability (A) in a distributed system.</p><p>This is because partitioning in a distributed system is objectively unavoidable, whereas a database in a monolithic system can guarantee data consistency and availability through transactions: for example, MySQL transactions provide the four main properties of Atomicity, Consistency, Isolation, and Durability, or ACID for short.</p><h2 id="5-BASE-Theory"><a href="#5-BASE-Theory" class="headerlink" title="5. BASE Theory"></a>5. BASE Theory</h2><p>BASE theory summarizes the practice of today’s Internet-scale distributed systems. 
Its core idea is that since the cost of achieving strong consistency in a distributed system is too high, it is better to settle for the second best.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/a1a6f98f0742448ea5d2cd21b7f86060%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>Each partition only needs to keep its data as consistent as it can while providing highly available service, that is, to ensure the <strong>final consistency</strong> of the data.</p><p>BASE theory is the result of trading off availability (A) against consistency (C) in CAP, under the premise that partition tolerance (P) is guaranteed. It consists of three parts, <strong>Basically Available, Soft State, and Eventually Consistent</strong>, hence the name BASE.</p><p>In distributed systems, CAP theory provides a theoretical framework, while BASE theory provides a guiding principle for practical operation.</p><h3 id="5-1-Basic-Availability"><a href="#5-1-Basic-Availability" class="headerlink" title="5.1 Basic Availability"></a>5.1 Basic Availability</h3><p>BASE theory recognizes that a distributed system may choose to reduce performance or consistency requirements in order to maintain basic availability in the face of failures or anomalies.</p><p>This means that the system may experience some transient inconsistencies, but will eventually reach a consistent state.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/4d5b6404071740b0a2d7b219727a7283%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>For example, the design of a banking system generally includes functional and non-functional requirements, and we first need to ensure the basic availability of the core functional requirements.</p><h4 id="Functional-Requirements"><a href="#Functional-Requirements" class="headerlink" title="Functional 
Requirements"></a>Functional Requirements</h4><p>In a banking system, transaction modules such as user withdrawals and transfers are the core functions; they are the basic needs of users and cannot go wrong.</p><p>Non-core functions are allowed to fail, but must be guaranteed to recover within a certain period of time.</p><h4 id="Non-functional-requirements"><a href="#Non-functional-requirements" class="headerlink" title="Non-functional requirements"></a>Non-functional requirements</h4><p>Non-functional requirements are requirements that the user’s core business does not strictly depend on, such as performance: a transfer is expected to complete within 0.5 seconds, but due to network delays and other factors the response may stretch to 1~2 seconds.</p><p>Such anomalies reduce the system’s high availability, but the core process remains usable; this is basic availability.</p><h3 id="5-2-Soft-State"><a href="#5-2-Soft-State" class="headerlink" title="5.2 Soft State"></a>5.2 Soft State</h3><p>Soft state means that <strong>system services may be in an intermediate state</strong>: data synchronization may be delayed while consistency is being reached, but this does not affect the availability of the system.</p><p>For example, after we finish paying for a train ticket, we may be in an intermediate waiting state that is neither fully successful nor failed. 
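</p><p>The intermediate state of the ticket order can be sketched as a tiny state machine; the class and state names here are illustrative, not from any real ticketing system:</p>

```python
# A sketch of "soft state": after payment, a ticket order sits in an
# intermediate PENDING state until the system finishes synchronizing,
# at which point it hardens into a final SUCCESS or FAILED state.
from enum import Enum

class OrderState(Enum):
    PENDING = "pending"   # paid, waiting for data to synchronize
    SUCCESS = "success"
    FAILED = "failed"

class TicketOrder:
    def __init__(self):
        self.state = OrderState.PENDING   # the soft, intermediate state

    def settle(self, ok):
        # Called once synchronization completes; the state becomes final.
        self.state = OrderState.SUCCESS if ok else OrderState.FAILED

order = TicketOrder()
before = order.state      # PENDING: neither fully successful nor failed
order.settle(True)
after = order.state       # SUCCESS: the final, hardened state
```

<p>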
The user has to wait for the system’s data to be fully synchronized before getting the final status of whether or not the ticket was purchased successfully.</p><p>BASE theory recognizes that in a distributed system, the state may remain soft for a while rather than reaching a consistent state immediately.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/96d629d8a59c44e7a45733847d4895be%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>This means that we need to tolerate some state uncertainty; for example, while sitting in a train-ticket waiting queue we cannot be sure whether the purchase will go through.</p><h3 id="5-3-Final-Consistency"><a href="#5-3-Final-Consistency" class="headerlink" title="5.3 Final Consistency"></a>5.3 Final Consistency</h3><p>Eventual consistency is the core idea of BASE theory. It states that <strong>distributed systems can remain inconsistent for a period of time, but eventually converge to a consistent state</strong>.</p><p>Unlike strong consistency, it does not require partitioned data to be consistent in real time, which would make data synchronization costly. Unlike weak consistency, it does not leave updates unsynchronized indefinitely, which would leave subsequent requests reading only old data.</p><p>Most of today’s distributed systems in industry, and even relational database systems, are realized with eventual consistency. 
For example, MySQL’s master-slave replication uses the <code>binlog</code> and replication threads to bring the slaves’ data into eventual consistency with the master over time.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/ca77d296a143495ebe0a570f5d627d00%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>In a nutshell, BASE theory sacrifices strong consistency across nodes and allows data on different nodes to be temporarily inconsistent in exchange for higher performance and availability.</p><p>In a monolithic system, a database can still achieve strong transactional consistency through ACID, but distributed transactions must contend with node communication delays and network failures.</p><p>Therefore, BASE theory is the scheme we most often use in real distributed systems.</p>]]></content>
    
    
    <summary type="html">In the field of computer science, distributed systems are a challenging research area and an essential optimization practice for Internet applications; CAP theory and BASE theory are two key concepts in distributed systems.</summary>
    
    
    
    <category term="Backend" scheme="https://www.nablepart.com/categories/Backend/"/>
    
    
    <category term="Backend" scheme="https://www.nablepart.com/tags/Backend/"/>
    
    <category term="network" scheme="https://www.nablepart.com/tags/network/"/>
    
    <category term="Distributed" scheme="https://www.nablepart.com/tags/Distributed/"/>
    
    <category term="Interviews" scheme="https://www.nablepart.com/tags/Interviews/"/>
    
    <category term="optimization" scheme="https://www.nablepart.com/tags/optimization/"/>
    
    <category term="distributed" scheme="https://www.nablepart.com/tags/distributed/"/>
    
    <category term="systems" scheme="https://www.nablepart.com/tags/systems/"/>
    
  </entry>
  
  <entry>
    <title>U.S. Stock Market Surges on Tariff Relief and Tech Rally: A Deep Dive into May 2025&#39;s Volatility​</title>
    <link href="https://www.nablepart.com/t2/Information-20250527/"/>
    <id>https://www.nablepart.com/t2/Information-20250527/</id>
    <published>2025-05-29T01:00:00.000Z</published>
    <updated>2025-08-25T09:00:39.798Z</updated>
    
    <content type="html"><![CDATA[<p>Title: U.S. Stock Market Surges on Tariff Relief and Tech Rally: A Deep Dive into May 2025’s Volatility<br>By [Your Name], Financial Analyst<br>May 29, 2025</p><h2 id="Market-Overview-A-Robust-Recovery-Amid-Policy-Shifts"><a href="#Market-Overview-A-Robust-Recovery-Amid-Policy-Shifts" class="headerlink" title="Market Overview: A Robust Recovery Amid Policy Shifts"></a>Market Overview: A Robust Recovery Amid Policy Shifts</h2><p>The U.S. stock market witnessed a dramatic rebound in late May 2025, driven by easing trade tensions and resilient tech sector performance. On May 27, the S&amp;P 500 surged 2.05% to 5,921.54, nearing its February all-time high, while the Nasdaq Composite jumped 2.47% to 19,199.16, marking its strongest single-day gain since April. The Dow Jones Industrial Average rose 1.78%, closing at 42,343.65, as investor sentiment shifted from caution to optimism. This rally followed weeks of volatility tied to trade policy uncertainty, with the S&amp;P 500 now up 6.3% year-to-date and the Nasdaq reclaiming a 10% gain for 2025.</p><p>The turnaround was catalyzed by President Trump’s decision to delay imposing 50% tariffs on EU goods until July 9, coupled with progress in U.S.-EU trade negotiations. 
This move alleviated fears of an immediate economic shock, with Apollo Wealth Management’s Eric Steiner noting, “The market is learning to navigate Trump’s negotiation tactics, pricing in pauses but remaining wary of abrupt reversals”.</p><h2 id="​​Tech-Stocks-Lead-the-Charge-AI-and-Semiconductor-Dominance​​"><a href="#​​Tech-Stocks-Lead-the-Charge-AI-and-Semiconductor-Dominance​​" class="headerlink" title="​​Tech Stocks Lead the Charge: AI and Semiconductor Dominance​​"></a>​​Tech Stocks Lead the Charge: AI and Semiconductor Dominance​​</h2><p>Technology giants spearheaded the recovery, reflecting their outsized influence on market momentum. Tesla surged nearly 7% on renewed enthusiasm for electric vehicles and AI-driven manufacturing innovations, while Nvidia climbed over 3% ahead of its earnings report, which projected a 66% year-over-year revenue jump. The “Magnificent Seven” (Apple, Microsoft, Amazon, Meta, Alphabet, Nvidia, and Tesla) collectively added $320 billion in market cap, underscoring their role as market stabilizers.</p><p>Semiconductor stocks also shone, with KLA Corp and ASML rising over 4% and 3%, respectively, fueled by demand for AI infrastructure and easing supply chain constraints. The sector’s resilience aligns with Fidelity International’s 2025 outlook, which emphasized tech earnings as a “profit pivot” for broader market gains.</p><h2 id="​Trade-Policy-A-Double-Edged-Sword​​"><a href="#​Trade-Policy-A-Double-Edged-Sword​​" class="headerlink" title="​Trade Policy: A Double-Edged Sword​​"></a>​Trade Policy: A Double-Edged Sword​​</h2><p>Tariff developments remained a critical driver. The May 27 tariff delay reversed a mid-May selloff triggered by Trump’s earlier threats, which had sent the S&amp;P 500 down 2.6% in a single week. 
Investors now appear desensitized to trade rhetoric, with Paul Nolte of Murphy &amp; Sylvest Wealth Management observing, “The market has decoded Trump’s brinkmanship—sell on threats, buy on delays”.</p><p>However, risks linger. Citi’s Chief Economist Nathan Sheets warned that Trump’s “unprecedentedly aggressive” trade policies could slash global growth to 2.3% in 2025, with U.S. deficits projected to average 6% of GDP. The World Bank echoed concerns, noting that prolonged tariffs might shave 0.5% off U.S. GDP by year-end.</p><h2 id="​Economic-Data-Mixed-Signals-and-Inflation-Anxiety​​"><a href="#​Economic-Data-Mixed-Signals-and-Inflation-Anxiety​​" class="headerlink" title="​Economic Data: Mixed Signals and Inflation Anxiety​​"></a>​Economic Data: Mixed Signals and Inflation Anxiety​​</h2><p>Recent economic indicators painted a nuanced picture. The May Consumer Confidence Index rebounded 14.4%, signaling renewed household optimism, while the U-3 unemployment rate held steady at 4.1%. However, the U-6 underemployment rate ticked higher, and 44% of consumers anticipated rising joblessness—a red flag for spending.</p><p>Inflation remains a wildcard. Despite core CPI cooling to 3.2%, Trump’s “reciprocal tariffs” have pushed 1-year inflation expectations to 7.3%, the highest since the 1980s. Atlanta Fed President Raphael Bostic reiterated concerns, stating, “Tariffs could reignite price pressures, complicating the Fed’s path”.<br>​</p><h2 id="​Sectoral-Performance-Divergence-and-Opportunity​​"><a href="#​Sectoral-Performance-Divergence-and-Opportunity​​" class="headerlink" title="​Sectoral Performance: Divergence and Opportunity​​"></a>​Sectoral Performance: Divergence and Opportunity​​</h2><p>​1. <strong>​Winners: Tech and Consumer Discretionary​​</strong><br>    - The S&amp;P 500’s tech and consumer discretionary sectors soared 3.1% and 2.8%, respectively, on May 27. 
Companies like Shopify (+13.7% in mid-May) and Airbnb (+9%) capitalized on AI-driven efficiency gains and pent-up travel demand.<br>    - Automotive stocks rallied, with Stellantis up 6% and Toyota gaining 2%, buoyed by tariff relief and EV subsidies.<br>2. <strong>Laggards: Gold and Healthcare</strong><br>    - Gold miners slumped as risk appetite returned: Gold Resources plunged 6%, and Newmont Corp fell 4%.<br>    - Healthcare underperformed, with the S&amp;P 500 Health Care Index flatlining amid policy uncertainty and a 62% crash in Rocket Pharma after FDA halted its gene therapy trial.</p><h2 id="China’s-Tech-Dilemma-Contrasting-Fortunes-in-U-S-Listings"><a href="#China’s-Tech-Dilemma-Contrasting-Fortunes-in-U-S-Listings" class="headerlink" title="China’s Tech Dilemma: Contrasting Fortunes in U.S. Listings"></a>China’s Tech Dilemma: Contrasting Fortunes in U.S. Listings</h2><p>Chinese ADRs faced turbulence. While the Nasdaq Golden Dragon Index dipped 0.28%, outliers like Bilibili (+2%) and Tencent Music (+2%) defied the trend. However, Pinduoduo nosedived 13.6% after missing Q1 revenue estimates (95.67 billion vs. 98.5 billion expected) and reporting a 45% profit drop. EV makers also struggled, with Nio and XPeng down over 3% amid tariff-related supply chain fears.</p><h2 id="Looking-Ahead-Navigating-a-Fragile-Equilibrium"><a href="#Looking-Ahead-Navigating-a-Fragile-Equilibrium" class="headerlink" title="Looking Ahead: Navigating a Fragile Equilibrium"></a>Looking Ahead: Navigating a Fragile Equilibrium</h2><p>The market’s trajectory hinges on three factors:</p><ol><li><strong>Fed Policy</strong>: With rate cuts paused, Chair Powell’s June remarks will clarify whether stubborn inflation warrants prolonged tightening.</li><li><strong>Earnings Momentum</strong>: Q2 reports, especially from AI-centric firms like Nvidia and AMD, must validate lofty valuations.</li><li>
<strong>Geopolitical Risks</strong>: Escalating Middle East tensions and U.S.-China tech decoupling could reignite volatility.</li></ol><p>Goldman Sachs strategists advise a barbell approach: “Overweight AI innovators and defensive utilities, while hedging with Treasury futures against policy shocks”.</p><h2 id="Conclusion-Cautious-Optimism-in-a-Policy-Driven-Market"><a href="#Conclusion-Cautious-Optimism-in-a-Policy-Driven-Market" class="headerlink" title="Conclusion: Cautious Optimism in a Policy-Driven Market"></a>Conclusion: Cautious Optimism in a Policy-Driven Market</h2><p>May 2025 underscored the U.S. stock market’s resilience amid whiplash-inducing headlines. While tech strength and delayed tariffs fueled gains, high valuations (S&amp;P 500 P&#x2F;E ratio: 38.5x) and political unpredictability demand vigilance. As Fundstrat’s Tom Lee cautioned, “The rally’s second act requires earnings to catch up with prices—or risk a 10% correction”. For now, investors ride the updraft, but with seatbelts fastened.</p>]]></content>
    
    
    <summary type="html">The U.S. stock market witnessed a dramatic rebound in late May 2025, driven by easing trade tensions and resilient tech sector performance. On May 27, the S&amp;P 500 surged 2.05% to 5,921.54, nearing its February all-time high, while the Nasdaq Composite jumped 2.47% to 19,199.16, marking its strongest single-day gain since April.</summary>
    
    
    
    <category term="Financial" scheme="https://www.nablepart.com/categories/Financial/"/>
    
    
    <category term="Stock Market" scheme="https://www.nablepart.com/tags/Stock-Market/"/>
    
    <category term="​Trade Policy" scheme="https://www.nablepart.com/tags/%E2%80%8BTrade-Policy/"/>
    
    <category term="​Economic Data" scheme="https://www.nablepart.com/tags/%E2%80%8BEconomic-Data/"/>
    
    <category term="​Sectoral Performance" scheme="https://www.nablepart.com/tags/%E2%80%8BSectoral-Performance/"/>
    
  </entry>
  
  <entry>
    <title>Breakthroughs in Quantum Computing, AI, and Biotechnology Define 2025&#39;s Tech Landscape​</title>
    <link href="https://www.nablepart.com/u/Information-20250527/"/>
    <id>https://www.nablepart.com/u/Information-20250527/</id>
    <published>2025-05-28T21:04:00.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
    <content type="html"><![CDATA[<h2 id="Quantum-Computing-Redefining-the-Boundaries-of-Computation"><a href="#Quantum-Computing-Redefining-the-Boundaries-of-Computation" class="headerlink" title="Quantum Computing: Redefining the Boundaries of Computation"></a>Quantum Computing: Redefining the Boundaries of Computation</h2><p>2025 has emerged as a pivotal year for quantum technology, with both academic and commercial milestones reshaping the field. The University of Science and Technology of China (USTC) unveiled the <strong>Zuchongzhi-3</strong> superconducting quantum computer, boasting 105 functional qubits and outperforming classical supercomputers by <strong>15 orders of magnitude</strong> in solving quantum random circuit sampling tasks. This achievement, published in Physical Review Letters, solidifies China’s position in the global quantum race, closely rivaling U.S. advancements like Google’s latest Sycamore iterations.</p><p>Meanwhile, <strong>D-Wave</strong> launched its <strong>Advantage2</strong> quantum annealer, designed for optimization and AI tasks, featuring a 40% boost in energy efficiency and 75% noise reduction. In a parallel breakthrough, Australian researchers at the University of Sydney demonstrated <strong>single-ion quantum simulation</strong> of organic molecular dynamics, a leap toward practical quantum chemistry tools. 
Such innovations highlight quantum computing’s transition from theoretical promise to real-world utility, with applications in drug discovery, logistics optimization, and climate modeling.</p><h2 id="​​Artificial-Intelligence-Smaller-Models-Bigger-Impacts​​"><a href="#​​Artificial-Intelligence-Smaller-Models-Bigger-Impacts​​" class="headerlink" title="​​Artificial Intelligence: Smaller Models, Bigger Impacts​​"></a>​​Artificial Intelligence: Smaller Models, Bigger Impacts​​</h2><p>The AI landscape in 2025 is marked by the rise of <strong>​​compact</strong>, <strong>energy-efficient models</strong>​​ that challenge the dominance of large language models (LLMs). Chinese firm <strong>​​DeepSeek​​</strong> disrupted the market with ​​R1​​, an open-source model achieving performance comparable to OpenAI’s GPT-4 at a fraction of the cost ($5.57 million vs. billions). R1’s ability to run locally on edge devices enhances privacy and democratizes access, empowering schools and SMEs.</p><p>Generative AI has also evolved. <strong>​​Sora​​</strong>, OpenAI’s text-to-video model, now generates hyper-realistic 3D environments, while <strong>​​Huawei​​</strong> and <strong>​​China Mobile​</strong>​ showcased <strong>​​5G-A-enabled humanoid robots</strong>​​ performing complex tasks like precision welding and disaster response. Ethical governance remains critical, as China’s AI Technology for Good White Paper sets global standards for data security and algorithmic transparency.</p><h2 id="Biotech-CRISPR-Expands-Its-Healing-Horizon"><a href="#Biotech-CRISPR-Expands-Its-Healing-Horizon" class="headerlink" title="Biotech: CRISPR Expands Its Healing Horizon"></a>Biotech: CRISPR Expands Its Healing Horizon</h2><p>Gene editing continues its therapeutic revolution. 
Following the 2023 approval of <strong>Casgevy</strong> for sickle cell disease, 2025 sees <strong>CRISPR-based therapies</strong> targeting chronic hepatitis B, age-related macular degeneration, and autoimmune disorders. Researchers are now engineering <strong>universal CAR-T cells</strong> using CRISPR, enabling off-the-shelf cancer immunotherapy and reducing treatment costs by 50%.</p><p>In HIV prevention, a <strong>biannual injectable drug</strong> reported <strong>0% infection rates</strong> in trials, offering hope for eradicating the virus in underserved regions. Meanwhile, <strong>AlphaFold3</strong> has mapped over 200 million protein structures, accelerating drug design and pandemic preparedness.</p><h2 id="Space-Exploration-A-Crowded-Moon-and-Cosmic-Insights"><a href="#Space-Exploration-A-Crowded-Moon-and-Cosmic-Insights" class="headerlink" title="Space Exploration: A Crowded Moon and Cosmic Insights"></a>Space Exploration: A Crowded Moon and Cosmic Insights</h2><p>The Moon is buzzing with activity. Japan’s <strong>iSpace</strong> and the U.S.’s <strong>Intuitive Machines</strong> are deploying landers to the lunar south pole, searching for water ice to support future colonies. China’s <strong>Tianwen-2</strong> mission, set for mid-2025, aims to return samples from asteroid <strong>2016 HO3</strong> and study comet <strong>311P</strong>, advancing planetary defense strategies.</p><p>Astrophysics leaps forward with NASA’s <strong>SPHEREx</strong> telescope, launching in February to survey 450 million galaxies and 100 million stars, unraveling cosmic mysteries like dark matter. 
The <strong>Solar Wind-Magnetosphere Imager (SMILE)</strong>, a Sino-European satellite, will visualize solar wind interactions with Earth’s magnetic field, improving space weather forecasts.</p><h2 id="Green-Tech-Energy-Innovation-Meets-Climate-Urgency"><a href="#Green-Tech-Energy-Innovation-Meets-Climate-Urgency" class="headerlink" title="Green Tech: Energy Innovation Meets Climate Urgency"></a>Green Tech: Energy Innovation Meets Climate Urgency</h2><p>Nuclear energy is reborn. <strong>Small Modular Reactors (SMRs)</strong> are gaining traction, with tech giants like Google and Amazon investing in nuclear-powered data centers to meet AI’s energy demands. China’s <strong>Hualong One</strong> reactor and the <strong>BZ26-6</strong> offshore oilfield (the world’s largest metamorphic-rock reserve) underscore its energy diversification.</p><p>In agriculture, <strong>methane-reducing feed additives</strong> for cattle have slashed emissions by 30%, while sustainable aviation fuels derived from waste oils and CO₂ are scaling globally. The <strong>COP30</strong> summit in Brazil will test nations’ resolve to fund these transitions.</p><h2 id="Neurotechnology-and-Robotics-Merging-Mind-and-Machine"><a href="#Neurotechnology-and-Robotics-Merging-Mind-and-Machine" class="headerlink" title="Neurotechnology and Robotics: Merging Mind and Machine"></a>Neurotechnology and Robotics: Merging Mind and Machine</h2><p>China’s <strong>NEO</strong> brain-computer interface (BCI), rivaling Neuralink, enables paralyzed patients to control robotic limbs with 95% accuracy. Meanwhile, <strong>autonomous taxis</strong> operate in over a dozen cities, with Baidu and Tesla vying for dominance in AI-driven mobility.</p><p>Robotics sees a paradigm shift. 
<strong>General-purpose robots</strong> trained via generative AI now adapt to dynamic environments, from factory floors to disaster zones, reducing deployment times from months to hours.</p><h2 id="Conclusion-A-Global-Tech-Race-Redefined"><a href="#Conclusion-A-Global-Tech-Race-Redefined" class="headerlink" title="Conclusion: A Global Tech Race Redefined"></a>Conclusion: A Global Tech Race Redefined</h2><p>As 2025 unfolds, the fusion of <strong>quantum computing</strong>, <strong>AI ethics</strong>, and <strong>biotech breakthroughs</strong> underscores a collaborative yet competitive global ecosystem. China’s strides in nuclear fusion, quantum networks, and green infrastructure signal its ambition to lead, while U.S. and EU innovations in AI governance and space exploration highlight diverse priorities. The challenge lies in ensuring equitable access to these technologies, lest the digital divide deepen into a chasm.</p>]]></content>
    
    
    <summary type="html">2025 has emerged as a pivotal year for quantum technology, with both academic and commercial milestones reshaping the field. The University of Science and Technology of China (USTC) unveiled the Zuchongzhi-3 superconducting quantum computer, boasting 105 functional qubits and outperforming classical supercomputers by 15 orders of magnitude in solving quantum random circuit sampling tasks.</summary>
    
    
    
    <category term="Technology" scheme="https://www.nablepart.com/categories/Technology/"/>
    
    
    <category term="AI" scheme="https://www.nablepart.com/tags/AI/"/>
    
    <category term="Quantum Computing" scheme="https://www.nablepart.com/tags/Quantum-Computing/"/>
    
    <category term="Biotechnology" scheme="https://www.nablepart.com/tags/Biotechnology/"/>
    
  </entry>
  
  <entry>
    <title>What is the tax exemption for U.S. insurance?</title>
    <link href="https://www.nablepart.com/70e0635565d1/"/>
    <id>https://www.nablepart.com/70e0635565d1/</id>
    <published>2025-04-20T05:23:31.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
    <content type="html"><![CDATA[<p>In the United States, everyone knows that buying IUL insurance can be tax-free!</p><p>However, some of our friends who have just immigrated to the United States are not very clear about which taxes are exempted and why.</p><p>Today, let’s talk about what taxes an IUL policy in the United States can be exempt from!</p><h2 id="Life-Insurance-Death-Benefits"><a href="#Life-Insurance-Death-Benefits" class="headerlink" title="Life Insurance Death Benefits"></a>Life Insurance Death Benefits</h2><p>A death benefit from life insurance is treated as part of the estate: it is not subject to personal income tax, but it may be subject to estate tax.</p><p>So why doesn’t life insurance count as income?</p><p>An example:</p><p>Suppose you own a house and have homeowners insurance.</p><p>If you are unfortunate enough to have your house destroyed by fire one day and the insurance company pays you a claim, should this payout be counted as your income? Of course not! Because you no longer have a house, the insurance money compensates for the loss of the house.</p><p>The same applies to life insurance. 
Assuming the insured is the main source of income for the family, if something happens to him, the death benefit effectively replaces the income he would have continued to contribute.</p><p>This is the logic behind the fact that the death benefit of a life insurance policy is not counted as income.</p><p>So how is estate tax paid on a death benefit from a life insurance policy?</p><p>There are two scenarios:</p><p>1. <strong>Life insurance purchased by foreigners</strong>: Regardless of the status&#x2F;nationality of the beneficiaries, no estate tax is paid on the death benefit as long as the policyholder is a non-U.S. person. By contrast, foreigners who purchase real estate in the U.S. are subject to U.S. estate tax in the event of an unexpected death, and the exemption is not $11.4 million but only $60,000.</p><p>2. <strong>Life insurance purchased by U.S. persons</strong>: the death benefit received by the beneficiary is added to the rest of the inheritance; the portion within the estate tax exemption is tax-free, and the portion above it owes estate tax. This exemption amount changes frequently; in 2019 it was $11.4 million. The excess is taxed at progressive rates, reaching 40% on the portion more than $1 million over the exemption.</p><h2 id="Cash-appreciation-in-life-insurance-accounts"><a href="#Cash-appreciation-in-life-insurance-accounts" class="headerlink" title="Cash appreciation in life insurance accounts"></a>Cash appreciation in life insurance accounts</h2><p>Cash value appreciation in a life insurance account is exempt from personal income tax, but certain conditions must be met.</p><p>Life insurance premiums are paid with money that has already been subject to individual income tax. The appreciation of the cash value after the premiums are deposited into the policy can be enjoyed tax-free because most people access the money in the form of principal withdrawals or loans. 
The principal has already been taxed, and a loan is not income, so neither is subject to income tax.</p><p>When some brokers sell life insurance, key details like this are not made clear, and the point is left vague, intentionally or not, as “withdrawals from life insurance policies are tax-free”.</p><p>You can only withdraw money from a policy that has a savings and investment function. The amount of money that can be withdrawn is determined by the cash value of the policy.</p><h3 id="Why-emphasize-policy-loans"><a href="#Why-emphasize-policy-loans" class="headerlink" title="Why emphasize policy loans?"></a>Why emphasize policy loans?</h3><p>The death benefit of a life insurance policy is originally left to the beneficiary after the insured passes away, but now we can use it for ourselves; in effect, we are “borrowing” the death benefit from the beneficiary of our policy in advance. The collateral for the “loan” is the cash value of the policy. This is why the amount of money that can be “borrowed” from the policy must be within the cash value of the policy.</p><p>Since it’s a loan, naturally there will be interest. 
But on the one hand, the money is your own, and on the other hand, since the insurance company wants to make “getting money while you are alive” a selling point, the interest rate charged is extremely low, and in some cases it is even zero percent.</p><p>Prior to 1980, the money put into life insurance was tax-free and no conditions were set.</p><p>Later, more and more people used insurance to avoid taxes, to the point that some people put millions of dollars into a policy at one time while the insured amount was only a few tens of thousands of dollars, using the policy to avoid a large amount of personal income tax.</p><p>To cope with this situation, several rules were issued after 1984 stipulating what kind of life insurance policy money is tax-free, and these regulations are still in effect.</p><p>For a policy to be tax-free, two conditions need to be met:</p><p><strong>1. The amount of annual premiums paid for the policy cannot exceed the maximum amount set by the IRS; otherwise future withdrawals will be taxed on the total gains.</strong></p><p>The IRS has Life Insurance Definitional Testing and MEC (modified endowment contract) testing for every life insurance policy, and those that pass the test enjoy tax exemption.</p><p><strong>What happens if a policy fails the test and becomes a MEC policy?</strong></p><p>a. Policy withdrawals are taxed by the IRS on the total appreciation in value beyond the principal amount of the policy, calculated at individual income tax rates.</p><p>b. Money withdrawn from the policy before age 59½ is subject to a 10% penalty to the IRS.</p><p>c. Once a policy is recognized as a MEC, it is a MEC for life.</p><p>d. At the time of premium payment, the policyholder has two months to adjust the payment if he or she finds that the payment amount exceeds the maximum amount that can be saved for the year. 
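</p><p>As a rough numeric illustration of the MEC consequences above, here is a hypothetical Python sketch (the function name, the 24% bracket, and the exact gains-first ordering are illustrative assumptions based on the description here, not tax advice):</p>

```python
def mec_withdrawal_tax(cash_value, principal, withdrawal, age, income_tax_rate=0.24):
    """Illustrative only: rough tax owed on a withdrawal from a MEC policy.

    Per the rules described above, gains come out first and are taxed as
    ordinary income, with a 10% penalty before age 59.5. The 24% rate is
    an assumed bracket, not a prescribed figure.
    """
    gain = max(0, cash_value - principal)   # appreciation beyond principal
    taxable = min(withdrawal, gain)         # gains are deemed withdrawn first
    tax = taxable * income_tax_rate
    if age < 59.5:
        tax += taxable * 0.10               # early-withdrawal penalty
    return taxable, tax

# Example: $250,000 cash value on $200,000 of premiums, withdrawing $60,000 at age 55.
taxable, tax = mec_withdrawal_tax(250_000, 200_000, 60_000, age=55)
```

<p>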
Nowadays, every policy will run this test for the customer at the proposal stage, and the broker will clearly inform the customer of the maximum amount that can be saved in a year, the recommended number of years to save, how much to save per year, what happens if it is exceeded, etc.</p><p><strong>2. The policy cannot be terminated prior to death once money has been drawn out.</strong></p><p>If the policy is terminated before death, the IRS will consider the policy not as insurance, but as an investment.</p><p>The investment is subject to capital gains tax, which is levied on the total amount of money withdrawn by the client minus the total amount of principal invested. The tax rate is based on the long-term capital gains rate and is related to the tax bracket the policyholder is in for the year of termination.</p><p>Lapse (life insurance termination) is a word we often see in a policy illustration.</p><p>For example, if you save $10,000 per year for 20 years, for a total of $200,000, and you haven’t had a chance to withdraw any money yet, and then in the 25th year the policy lapses, you don’t have to pay income tax. But if you withdraw money from the policy, say $50,000 a year for 10 years, for a total of $500,000, and the policy lapses in the 30th year, then $500,000 - $200,000 = $300,000 is subject to capital gains tax.</p><h2 id="Premium-tax-paid-by-insurance-companies-to-their-customers"><a href="#Premium-tax-paid-by-insurance-companies-to-their-customers" class="headerlink" title="Premium tax paid by insurance companies to their customers"></a>Premium tax paid by insurance companies to their customers</h2><p>When premiums are paid, the insurance company pays a premium tax on behalf of the customer. 
All life insurance companies charge a fee, usually 6-8%, on each premium received, and this amount includes the premium tax charged by the states, which is about 3%.</p><p>This portion of the tax is mandatory and is paid on all life policies in the U.S. Tax rates vary slightly from state to state, and each company charges a slightly different rate and collects it in a slightly different way. Now you should have a good sense of how life insurance tax exemption works in the United States!</p>]]></content>
    
    
    <summary type="html">In the United States, everyone knows that buying IUL insurance can be tax-free</summary>
    
    
    
    <category term="Insurance" scheme="https://www.nablepart.com/categories/Insurance/"/>
    
    
    <category term="Insurance" scheme="https://www.nablepart.com/tags/Insurance/"/>
    
    <category term="US" scheme="https://www.nablepart.com/tags/US/"/>
    
    <category term="risk" scheme="https://www.nablepart.com/tags/risk/"/>
    
    <category term="insurance premium" scheme="https://www.nablepart.com/tags/insurance-premium/"/>
    
    <category term="tax" scheme="https://www.nablepart.com/tags/tax/"/>
    
  </entry>
  
  <entry>
    <title>How do I buy insurance for a car in the US?</title>
    <link href="https://www.nablepart.com/99fbc678c6ab/"/>
    <id>https://www.nablepart.com/99fbc678c6ab/</id>
    <published>2024-09-16T13:03:00.000Z</published>
    <updated>2025-08-25T09:00:39.798Z</updated>
    
    <content type="html"><![CDATA[<p>If it’s an in-stock car that is going to be driven away the same day, the dealer will need you to provide proof of insurance at the end of the transaction. The general process is:</p><ol><li>basically determine which insurance company you want to buy insurance from before you go to the dealer to finally buy the car;</li><li>go to the dealer to look at the car and decide on the price and exactly which car;</li><li>the dealer will handle all the transaction formalities, including registration and temporary license plate, at which time you are required to provide insurance;</li><li>at this point you call the insurance company directly at the dealer, or log on to the insurance company’s official website to buy insurance; provide your driver’s license information, vehicle information, and the insurance options you need, and the insurance company will send the proof of insurance to the dealer within a few minutes;</li><li>once the dealer receives the certificate and completes the procedures mentioned in step 3, you can drive the car away.</li></ol><h3 id="So-how-do-you-pick-an-insurance-company"><a href="#So-how-do-you-pick-an-insurance-company" class="headerlink" title="So how do you pick an insurance company?"></a>So how do you pick an insurance company?</h3><p>The larger insurance companies in the US include GEICO, Progressive, Allstate, etc. Go directly to their websites to get a quote. A quote only needs general information about the vehicle, such as the model, but not the VIN, so you can get a general idea of the price before you go to the dealer. Among these big insurance companies there is basically no difference in service, so just pick a relatively cheap one.</p><p>If you already have other insurance with a particular insurer, such as homeowner’s insurance, you can prioritize that insurer, as there are usually some discounts. 
However, it is still advisable to get a few more quotes.</p><p>If you are a member of AAA or Costco, you can also consider the insurance they are associated with, as there are some discounts as well.</p><h3 id="What-kind-of-insurance-do-you-choose"><a href="#What-kind-of-insurance-do-you-choose" class="headerlink" title="What kind of insurance do you choose?"></a>What kind of insurance do you choose?</h3><p>Similar to car insurance back home, American car insurance also has many options, basically divided into two types: Liability and Comprehensive.</p><p>Liability is similar to compulsory insurance at home: it is the coverage the law requires before a car can be on the road, and it generally only pays the other party’s damages in an accident that is your fault.</p><p>Comprehensive can be thought of as all-risk insurance, which compensates both the other party’s damage and your own.</p><p>When choosing an insurance policy, there are two parameters for each coverage that directly affect the price:</p><ol><li><p>Limit, the amount of insurance coverage, is the maximum amount of compensation that the insurance company will pay in a single accident.</p></li><li><p>Deductible, the co-payment amount, is the portion of a single claim borne by the individual; the insurance company pays the amount beyond it.</p></li></ol><p>The choice of insurance and the amounts depends on individual needs. For new cars, it is still recommended to choose full coverage; choosing the minimum limit is fine, and a deductible of $1000 is a reasonable choice.</p><p>In addition, if you are buying a car with a loan, the lender usually has higher requirements for car insurance than the compulsory minimum, and the dealer will tell you before you buy.</p>]]></content>
    
    
    <summary type="html">The process of buying car insurance in the US and what to expect</summary>
    
    
    
    <category term="Living in U.S." scheme="https://www.nablepart.com/categories/Liveing-in-U-S/"/>
    
    
    <category term="Insurance" scheme="https://www.nablepart.com/tags/Insurance/"/>
    
    <category term="US" scheme="https://www.nablepart.com/tags/US/"/>
    
    <category term="risk" scheme="https://www.nablepart.com/tags/risk/"/>
    
    <category term="insurance premium" scheme="https://www.nablepart.com/tags/insurance-premium/"/>
    
    <category term="car insurance" scheme="https://www.nablepart.com/tags/car-insurance/"/>
    
  </entry>
  
  <entry>
    <title>The Power of Distributed Ledger Technology: What You Need to Know</title>
    <link href="https://www.nablepart.com/9fc716d9f998/"/>
    <id>https://www.nablepart.com/9fc716d9f998/</id>
    <published>2024-09-16T13:00:00.000Z</published>
    <updated>2025-08-25T09:00:39.794Z</updated>
    
    <content type="html"><![CDATA[<p>A blockchain is a digital ledger of transactions spread across the entire network of computers (or nodes) on the blockchain. Distributed ledgers consist of independent nodes that record, share, and synchronize transactions in separate electronic ledgers instead of storing them in one centralized server. A blockchain combines several technologies, such as digital signatures, distributed networks, and encryption&#x2F;decryption methods, on top of distributed ledger technology to facilitate blockchain applications. Blockchain is a kind of distributed ledger technology in which transactions are recorded with a permanent cryptographic signature called a hash; every blockchain is therefore a distributed ledger, though not every distributed ledger is a blockchain. But what exactly is distributed ledger technology? In this article, we will focus on distributed ledger technology, its benefits, and its applications.</p><h2 id="Distributed-Ledger-Technology-the-definition"><a href="#Distributed-Ledger-Technology-the-definition" class="headerlink" title="Distributed Ledger Technology, the definition"></a>Distributed Ledger Technology, the definition</h2><p>Distributed ledger technology is a digital system that records the transaction of assets by recording their details in multiple places at the same time. Compared to traditional databases, distributed ledgers lack a central data store or administration functionality. Distributed ledger technology refers particularly to the technological composition and protocols that enable the simultaneous access, validation, and updating of records, which is the main characteristic of distributed ledgers. It runs on a computer network distributed over multiple entities, locations, or nodes. A distributed ledger consists of nodes that process and verify every item, thereby creating a record of each item and developing a consensus on its accuracy. 
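</p><p>The idea that each record carries a permanent cryptographic hash chaining it to its predecessor can be shown with a toy append-only ledger (a minimal Python sketch under stated assumptions; <code>ToyLedger</code> is a hypothetical name, not a real library or any production DLT):</p>

```python
import hashlib
import json

def _hash(record: dict) -> str:
    # Hash a canonical (sorted-keys) JSON encoding of the record.
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class ToyLedger:
    """Append-only list of entries, each chained to the previous entry's hash."""

    def __init__(self):
        self.entries = []

    def append(self, data) -> None:
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {"data": data, "prev": prev}
        entry["hash"] = _hash({"data": data, "prev": prev})
        self.entries.append(entry)

    def verify(self) -> bool:
        # Recompute every hash; any tampering with history breaks the chain.
        prev = "0" * 64
        for e in self.entries:
            if e["prev"] != prev or e["hash"] != _hash({"data": e["data"], "prev": e["prev"]}):
                return False
            prev = e["hash"]
        return True

ledger = ToyLedger()
ledger.append({"from": "alice", "to": "bob", "amount": 10})
ledger.append({"from": "bob", "to": "carol", "amount": 4})
assert ledger.verify()
ledger.entries[0]["data"]["amount"] = 999   # tamper with history...
assert not ledger.verify()                  # ...and verification fails
```

<p>In a real distributed ledger, every node would keep its own copy of such a chain, and a consensus algorithm would decide which new entries are accepted.</p><p>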
A distributed ledger can record different types of static and dynamic data, including registries and financial transactions. Blockchain is a typical example of a distributed ledger technology.</p><h2 id="How-Does-Distributed-Ledger-Technology-Work"><a href="#How-Does-Distributed-Ledger-Technology-Work" class="headerlink" title="How Does Distributed Ledger Technology Work?"></a>How Does Distributed Ledger Technology Work?</h2><p>Distributed ledger technology is built on the principle of decentralization. In contrast to conventional centralized databases, distributed ledger technology works on a peer-to-peer (P2P) network, where several nodes store, validate, and update the ledger at the same time. This removes the need for a central authority and lessens the chance of a single point of failure. The process is accomplished by replicating the digital data across the network of nodes. Each node maintains an identical copy of the ledger and independently processes new transactions. To ensure agreement, all participating nodes employ a consensus algorithm that determines the correct version of the ledger. Once consensus is reached, the updated ledger is spread to all nodes, which ensures synchronization and consistency. Distributed ledger technology uses cryptography to store data securely, with cryptographic signatures and keys so that only authorized users can access it. The technology also produces an immutable database, which means the stored information cannot be erased and all updates are permanently recorded for the future.</p><p>This architecture represents an important difference in how information is gathered and communicated. It moves record-keeping from an individual, authoritative location to a decentralized system in which the invited parties can view and modify the ledger. Therefore, different parties can see who uses and modifies the ledger. 
This transparency is a defining feature of distributed ledger technology, one that provides a high level of trust among the participants and practically removes the possibility of fraudulent activity in the ledger. Distributed ledger technology therefore eliminates the need for entities using the ledger to depend on one central authority that controls the ledger, or on a third-party provider, to guard against manipulation.</p><h2 id="Industries-that-Use-Distributed-Ledger-Technology"><a href="#Industries-that-Use-Distributed-Ledger-Technology" class="headerlink" title="Industries that Use Distributed Ledger Technology"></a>Industries that Use Distributed Ledger Technology</h2><p>The use of distributed ledger technology spans a wide range of industries and is changing conventional processes. Common industries that use this technology include the following.</p><h3 id="Banking-and-Finance"><a href="#Banking-and-Finance" class="headerlink" title="Banking and Finance"></a>Banking and Finance</h3><p>The banking and finance industry is a major adopter of distributed ledger technology, especially in the implementation of smart contracts in trade finance. Smart contracts streamline the execution and settlement of trade transactions, decreasing delays and removing the need for middlemen. Distributed ledger technology also enables quicker cross-border payments, enhances processes, and offers secure digital identity solutions.</p><h3 id="Supply-chain-management"><a href="#Supply-chain-management" class="headerlink" title="Supply chain management"></a>Supply chain management</h3><p>Distributed ledger technology also plays a role in supply chain management. Distributed ledgers help companies monitor and validate the movement of goods, ensuring authenticity and preventing fraud. This technology provides real-time visibility into supply chain operations, decreases paperwork, and reduces delays. 
For example, companies can use a distributed ledger solution to track the origin of goods in an ethical and trustworthy way.</p><h3 id="Healthcare"><a href="#Healthcare" class="headerlink" title="Healthcare"></a>Healthcare</h3><p>Distributed ledger technology is also useful in the healthcare industry to improve patient data management, streamline processes, and enhance security. This technology can support the storage of medical records, ensuring data privacy and integrity. Also, smart contracts can automate insurance claims, minimizing administrative burdens and improving efficiency. Distributed ledger technology also makes it easier to conduct safe and transparent clinical trials, ensuring the integrity of data and building trust in the research process.</p><h3 id="Real-estate"><a href="#Real-estate" class="headerlink" title="Real estate"></a>Real estate</h3><p>Distributed ledger technology can help improve the real estate industry by streamlining property transactions, decreasing paperwork, and enhancing security. Using smart contracts, companies can automate property transfers accurately while providing a tamper-proof record of ownership. Blockchain platforms built on distributed ledgers are transparent, reducing the risk of fraud and disputes, and they remove the need for costly middlemen. Furthermore, this technology enables fractional ownership of real estate, creating new investment opportunities and adding liquidity to the market.</p><h2 id="What-the-future-holds-for-Distributed-Ledger-Technology"><a href="#What-the-future-holds-for-Distributed-Ledger-Technology" class="headerlink" title="What the future holds for Distributed Ledger Technology"></a>What the future holds for Distributed Ledger Technology</h2><p>With the resounding progress of distributed ledger technology, many still ask whether it can revolutionize how governments, institutions, and industries work. 
Many experts promote distributed ledger technology as an important asset that will not only improve existing processes but could also spur innovative new applications. Moreover, distributed ledger technology is seen as part of the “internet of value,” where transactions occur in real time across global networks, and it exists in the first place because of the pervasive internet that enables it. Nonetheless, according to experts, the adoption of this technology will follow the typical technology curve, led by a few exemplary leaders, then fast followers, and, finally, the laggards. Experts also believe that it won’t be easy for organizations to implement, scale, and operate distributed ledger technology. To that end, enterprise executives and entrepreneurs now face the task of establishing networks that can take advantage of distributed ledger technology to revolutionize how they share and keep records.</p>]]></content>
    
    
    <summary type="html">A blockchain is a digital ledger of transactions spread across the entire network of computers (or nodes) on the blockchain</summary>
    
    
    
    <category term="Blockchain" scheme="https://www.nablepart.com/categories/Blockchain/"/>
    
    
    <category term="Banking" scheme="https://www.nablepart.com/tags/Banking/"/>
    
    <category term="Healthcare" scheme="https://www.nablepart.com/tags/Healthcare/"/>
    
  </entry>
  
  <entry>
    <title>Half vs. full coverage auto insurance in the US</title>
    <link href="https://www.nablepart.com/0e9728d9dffb/"/>
    <id>https://www.nablepart.com/0e9728d9dffb/</id>
    <published>2024-09-16T05:08:42.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
    <content type="html"><![CDATA[<p>In the United States, all automobiles are required by law to carry an automobile insurance policy while on the road. Although laws vary from state to state, all require car owners to have Minimum Liability Insurance, and failure to comply with this requirement may result in fines or even suspension of the driver’s license. There are many types of auto insurance in the U.S., with semi-insurance (half coverage) and full coverage being the most common. Next, Tiger will walk you through the specifics of half and full coverage auto insurance in the United States.</p><h2 id="I-What-is-semi-insurance-and-full-insurance"><a href="#I-What-is-semi-insurance-and-full-insurance" class="headerlink" title="I. What is semi-insurance and full insurance?"></a>I. What is semi-insurance and full insurance?</h2><p>Semi-insurance, also known as Liability Insurance, is the most basic form of automobile insurance. It is used to pay for the other party’s property damage and bodily injury costs in an automobile accident. It is important to note that this insurance typically does not cover your own property damage or personal injuries. Specifically, semi-insurance includes:</p><p><strong>1. Property Damage Liability</strong>: If you are found to be at fault in an accident, this insurance policy will cover the cost of repairing or replacing the other party’s property, including vehicles, fences, buildings, and signals, according to the terms of the policy.<br><strong>2. Bodily Injury Liability (BIL)</strong>: If you are found to be at fault in an accident, this insurance policy will pay for the medical expenses, rehabilitation costs, and other related medical expenses of anyone injured in the other vehicle, according to the terms of the policy.</p><p>The Minimum Liability Limit varies from state to state in the U.S., but is generally between $10,000 and $50,000.</p><p>Take New York State, for example:</p><ul><li><p>Minimum $10,000 for property damage caused by a single accident.</p></li><li><p>Minimum $25,000 for single-person bodily injury and $50,000 for death.</p></li><li><p>Minimum $50,000 for multiple bodily injuries and $100,000 for deaths.</p></li></ul><p>It is important to note that insurance companies generally pay a maximum of $100,000 to $500,000 for liability coverage. If an accident occurs and the maximum insurance coverage is not enough to pay the full compensation, you will have to pay the difference out of your own pocket. Therefore, Tiger suggests deciding the maximum benefit amount according to your own assets.</p><p><strong>Full Coverage Insurance is a more comprehensive type of auto insurance</strong>. It is used to pay for property damage and bodily injury costs for both you and the other party after a vehicle accident. 
Specifically, <strong>full coverage insurance must include</strong>:</p><ol><li><strong>Liability Insurance</strong>: If you are found to be at fault in an accident, this insurance will pay for the cost of repairing or replacing the other party’s property and the related medical expenses, according to the terms of the policy.</li><li><strong>Collision Coverage</strong>: If your vehicle is damaged or destroyed in a collision with another vehicle, regardless of who is at fault, this coverage will pay to repair or replace your own vehicle, according to the terms of the policy.</li><li><strong>Comprehensive Coverage</strong>: This coverage protects your vehicle against a wide range of non-collision damages or risks, with claims paid according to specific terms. Common causes of claims include fire, theft, natural disasters, falling objects, animal collisions, and malicious damage by others.</li></ol><p>In addition to these three, car owners can also choose additional coverage. Common add-ons include:</p><p><strong>Medical Payments Coverage</strong>: This coverage pays for medical-related expenses incurred by you and your passengers in a car accident, such as ambulance fees, surgical procedures, prescription drugs, medical imaging, and funeral expenses, according to the terms of the policy. Medical Payments Coverage pays out regardless of who is responsible for the accident. 
Typically with this coverage, the insurance company pays the medical expenses directly, without you having to pay a deductible or make an advance payment.</p><p><strong>Uninsured&#x2F;Underinsured Motorist Coverage</strong>: This coverage protects you in a vehicle accident where the other driver is at fault but is uninsured or underinsured.</p><p><strong>Personal Injury Protection (PIP)</strong>: This coverage pays for medical expenses, lost wages, funeral costs, and other expenses incurred by you and your passengers in a car accident, regardless of who is responsible. In the United States, sixteen states require car owners to purchase this insurance; Tiger suggests you check your state’s specific requirements when shopping for this coverage.</p><p><strong>Rental Reimbursement</strong>: This insurance compensates you for the cost of renting a car while yours is being repaired after an accident, regardless of who is responsible. It usually has a daily or per-accident maximum limit; you can choose different limits when purchasing, depending on your needs and budget.</p><p><strong>Emergency Road Assistance Service (ERAS)</strong>: If your vehicle breaks down or becomes inoperable, the insurance company will pay for emergency services according to the terms and conditions, such as on-site repair and towing for the owner of the broken-down vehicle.</p><p>Note: liability insurance never pays for the insured’s own damages. If the other party is liable, the other party’s liability insurance pays you. If you are responsible, then your bodily injury and property damage will be covered by Medical Payments Coverage, Collision or Comprehensive, or possibly your own medical insurance (depending on your policy). 
If you do not have auto insurance or medical insurance, you will have to pay for your own medical care.</p><h2 id="II-Which-one-should-I-choose-half-or-full-coverage"><a href="#II-Which-one-should-I-choose-half-or-full-coverage" class="headerlink" title="II Which one should I choose, half or full coverage?"></a>II Which one should I choose, half or full coverage?</h2><p>Choosing between half and full coverage depends on your personal needs, vehicle value, financial situation, and risk tolerance. Here are a few steps to help you choose the right type of auto insurance:</p><p><strong>Check Legal Requirements and Minimum Requirements</strong>: You need to know the legal requirements for auto insurance in your state. Each state is different; some only require car owners to purchase minimum liability insurance, while others require broader coverage.</p><p><strong>Check for Loan and Lease Requirements</strong>: If you have a loan or lease on your car, your lender or lessor will likely require you to purchase full coverage. This is because they want to make sure the car is protected in the event of an accident.</p><p><strong>Assess the value of your vehicle</strong>: If your car is new or has a high value, full coverage is usually the better buy, because repairing or replacing it after an accident can be expensive. If your vehicle is more than ten years old, inexpensive, or used, semi-insurance may be more appropriate, since repairs or replacement parts will be cheaper.</p><p><strong>Evaluate your personal situation and risk tolerance</strong>: If you are an experienced driver and can afford to pay for damage to your own vehicle, you can save a lot on premiums by choosing semi-insurance. 
If you are new to driving or feel you cannot afford the costs of vehicle repair or replacement, full coverage may be more appropriate for you.</p><p><strong>Frequency of Vehicle Use</strong>: If you drive frequently, or drive in areas with poor road conditions, full coverage provides more comprehensive protection. If you drive short distances less frequently, and the roads in your area are in good condition, you can save a lot by choosing semi-insurance.</p><p><strong>Consult a professional</strong>: You can contact Tiger’s customer service to ask about auto insurance; Tiger will provide personalized advice and help you choose the right type of insurance and coverage.</p><p>If you want to learn more about auto insurance, check out our auto insurance blog posts. Tiger provides detailed information on the factors that affect the price of auto insurance, what to consider when buying auto insurance, ways to reduce the cost of auto insurance, and the U.S. auto insurance vocabulary section!</p>]]></content>
    
    
    <summary type="html">Half vs. full coverage auto insurance in the US</summary>
    
    
    
    <category term="Liveing in U.S." scheme="https://www.nablepart.com/categories/Liveing-in-U-S/"/>
    
    
    <category term="Insurance" scheme="https://www.nablepart.com/tags/Insurance/"/>
    
    <category term="US" scheme="https://www.nablepart.com/tags/US/"/>
    
    <category term="risk" scheme="https://www.nablepart.com/tags/risk/"/>
    
    <category term="Auto Insurance" scheme="https://www.nablepart.com/tags/Auto-Insurance/"/>
    
    <category term="Half vs. full" scheme="https://www.nablepart.com/tags/Half-vs-full/"/>
    
  </entry>
  
  <entry>
    <title>5 Essential Insurances for Living in the U.S.</title>
    <link href="https://www.nablepart.com/f81dffe26ec7/"/>
    <id>https://www.nablepart.com/f81dffe26ec7/</id>
    <published>2024-09-16T01:11:00.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
    <content type="html"><![CDATA[<p>Buying insurance is a form of risk management: you pay a premium to the insurance company, and if one of the accidents agreed upon in the contract occurs, the financial losses you suffer are compensated by the insurer.</p><p>Some of you may have had this experience: insurance felt too expensive and too troublesome to buy, and then you regretted being under-insured when it came time to file a claim. In this era of ubiquitous risk, carrying enough insurance is reliable protection for yourself and your family.</p><h2 id="The-five-most-important-types-of-insurance-to-buy"><a href="#The-five-most-important-types-of-insurance-to-buy" class="headerlink" title="The five most important types of insurance to buy"></a>The five most important types of insurance to buy</h2><h3 id="Auto-Insurance"><a href="#Auto-Insurance" class="headerlink" title="Auto Insurance"></a>Auto Insurance</h3><p>Auto insurance is a no-brainer. Almost everywhere in the U.S., laws mandate that drivers carry insurance before they can hit the road. If you don’t have insurance, or don’t have enough, you will have to pay out of pocket for medical expenses, car repairs, and everything else in case of an accident. And if you are at fault and cause someone else serious permanent injuries, such as paralysis or disability, the long-term medical expenses and other liability compensation can be enormous.</p><p>Don’t leave it to chance. According to statistics, there are an average of 6 million recorded car accidents in the U.S. each year; nearly 40,000 people die in them, and about 2 million are permanently injured. According to data released by the CDC in 2010, medical and compensation costs for car accident injuries and deaths in the U.S. 
totaled more than $99 billion in one year.<br>So if you want to save money on auto insurance, you can consider raising your deductible, or buying only half coverage if you drive an older car, but do not reduce your liability coverage; many professionals recommend carrying total liability coverage that exceeds your net worth. Of course, if you drive a new car and have the spare money, you should also consider collision and comprehensive coverage, so that if your car needs repairs after a crash, insurance can reimburse you and reduce your out-of-pocket expenses.</p><h3 id="Health-Insurance"><a href="#Health-Insurance" class="headerlink" title="Health Insurance"></a>Health Insurance</h3><p>As the saying goes, “Health is the greatest asset in life,” and in the United States, health insurance is indispensable. Most regular health insurance policies let you get regular checkups and vaccinations for free or at very low cost, and also reimburse most of your medical expenses when you are sick, so that you can actually afford to pay them.<br>There was once a study that said the number one reason Americans go broke is medical expenses →_→</p><p>Medical care in the United States is very expensive: a visit to an ordinary family doctor runs two or three hundred dollars, a blood test or special examination can easily add a few hundred more, and seeing a specialist costs even more; surgery and hospitalization bills routinely run into the thousands or tens of thousands. One serious illness can push an average working family toward bankruptcy.<br>So even though health insurance itself isn’t cheap, many people would rather have the most basic coverage than nothing at all. 
Many young people in the United States can’t afford regular medical insurance premiums, so they buy cheaper Short Term Health Insurance or Hospital Indemnity Insurance, so that at least in case of a serious illness there is some insurance to help with reimbursement.<br>Luckily, many employers also provide health insurance benefits, which can save the average working person a lot in insurance costs. So when looking for a job, in addition to the salary, you should also consider whether there are benefits such as health insurance, a hidden wealth that should not be underestimated.</p><h3 id="Home-Owner’s-Renter’s-Insurance"><a href="#Home-Owner’s-Renter’s-Insurance" class="headerlink" title="Home Owner’s&#x2F;Renter’s Insurance"></a>Home Owner’s&#x2F;Renter’s Insurance</h3><p>In the U.S., if you buy a house with a loan, the bank will require you to buy Home Owner’s Insurance. In case of fire or wind damage, theft or vandalism, or if the homeowner is sued and must compensate someone else, homeowner’s insurance pays for the losses; it also protects the homeowner’s ability to keep repaying the loan.<br>Even when it is not mandatory, homeowner’s insurance is a wise investment, because in addition to covering the structure of the house, it usually covers damage to the homeowner’s personal belongings. For example, if your car is broken into and your laptop is stolen from it, the comprehensive coverage of your auto insurance will only pay to repair the car, not for personal belongings like the laptop, whereas homeowner’s insurance will cover such losses.<br>In addition, homeowner’s insurance usually includes homeowner’s liability insurance. 
For example, if someone falls and gets hurt on your property, homeowner’s insurance will reimburse the medical and related expenses; or if you are sued for damage to someone else’s property, homeowner’s insurance will pay the costs of the lawsuit as well as the compensation.</p><p>However, homeowners should be aware that standard homeowners insurance does not cover damage caused by natural disasters such as floods, earthquakes, and landslides, so if your home is located in a high-risk area for these disasters, it is best to consider purchasing additional coverage.<br>Renters don’t have to worry about damage to the building itself, but renters insurance covers personal belongings and personal liability, and also reimburses the cost of a hotel or other accommodation if your apartment becomes temporarily uninhabitable, for example due to a fire. Renters insurance in the United States averages only $187&#x2F;year, usually just $15~$30&#x2F;month, less than the cost of a meal out, and very cost-effective.</p><h3 id="Disability-Insurance"><a href="#Disability-Insurance" class="headerlink" title="Disability Insurance"></a>Disability Insurance</h3><p>Life is unpredictable: an accidental injury or serious illness that leaves you unable to work, and therefore without a salary, can quickly disrupt normal life (rent, mortgage, utilities, and other daily expenses). Disability insurance is bought for exactly those times; it is another way to protect your income when you lose the ability to work.<br>Although the U.S. 
social security system is relatively complete, Workers’ Compensation only covers work-related injuries, unemployment benefits have nothing to do with incapacity to work, and applying for Social Security Disability Insurance (SSDI) generally requires waiting at least 5 months, with strict approval criteria and frequent rejections. Even when an application succeeds, the average benefit in 2018 was $1,197&#x2F;month, far from replacing normal income.<br>Disability insurance falls into two main categories, Short-Term (STDI) and Long-Term (LTDI). Most policies pay out about 60% of pre-tax income; because benefits on individually purchased policies are usually tax-free, that is roughly the same as what would actually reach your hands after taxes.</p><p>Short-term disability insurance generally pays out for a shorter period, about three months to a year, but benefits start sooner (usually after 1 to 14 days). Long-term disability insurance has a longer waiting period before benefits begin, usually 90 days, but keeps paying for a longer time, possibly several years or even until retirement. The cost of disability insurance is usually about 1% to 3% of your income, depending on the terms of the policy, and the waiting time, benefit percentage, and benefit period are usually all selectable.<br>Many employers also offer disability insurance benefits, so you may want to check with HR. Note, however, that if your employer pays for the insurance, or if you pay for it with pre-tax income, the benefits will be taxable when you need to claim them. In addition, according to the Society for Human Resource Management (SHRM), there are five states in the U.S. 
that provide or require employers to provide short-term disability insurance: California, Hawaii, New Jersey, New York, and Rhode Island; you can check the official websites for details.</p><h3 id="Life-Insurance"><a href="#Life-Insurance" class="headerlink" title="Life Insurance"></a>Life Insurance</h3><p>Death is not cheap in the U.S. According to GoBankingRates.com, the average cost of death necessities in the U.S. is $11,618, and the median out-of-pocket funeral expense (including viewing and cremation of the remains) is $7,360. If you are the primary source of income for your family, then, touch wood, should the worst happen, your family would likely be left in financial difficulty, and purchasing a life insurance policy gives your loved ones protection.<br>There are many types of life insurance, but they fall into two basic categories: Term Life Insurance and Whole Life Insurance.</p><p>Term life insurance protects the insured for the period specified in the contract (the Term, usually ranging from 5-30 years); once the term ends, so does the coverage. Whole life insurance specifies no term and remains in force until the death of the insured; it usually also carries a savings and investment function that accumulates cash value, which you can “borrow” against when you need it.<br>Life insurance premiums depend on the age and health of the insured, and for the same person and the same amount of coverage, whole life insurance can be several times more expensive than term insurance. 
However, term life insurance is usually affordable, so it’s worthwhile to purchase a term policy when you don’t have a lot of money to spare.</p><p>Especially if you are the breadwinner of your family and are making payments on your home and car, it is important to have a life insurance policy that lasts long enough (e.g., covering the entire term of your loan, or until retirement age) and has enough coverage to pay off all of your debts as well as your family’s future living expenses.</p><p>Of course, if you win a big lottery or your family is wealthy enough that losing your income would cause little financial stress, then life insurance may not be necessary.</p><h2 id="Five-of-the-worst-insurance-policies-to-buy"><a href="#Five-of-the-worst-insurance-policies-to-buy" class="headerlink" title="Five of the worst insurance policies to buy"></a>Five of the worst insurance policies to buy</h2><h3 id="Flight-Insurance"><a href="#Flight-Insurance" class="headerlink" title="Flight Insurance"></a>Flight Insurance</h3><p>Flying is actually one of the safest ways to travel; it is a bit of a waste to spend money insuring against such rare accidents, and if you have life insurance you are already covered.</p><h3 id="Mortgage-Protection-Insurance"><a href="#Mortgage-Protection-Insurance" class="headerlink" title="Mortgage Protection Insurance"></a>Mortgage Protection Insurance</h3><p>This type of insurance is similar to term life insurance, but claims are limited to paying off the mortgage. Unless you have been denied coverage by insurers due to a health condition, it is more cost-effective to buy term life insurance outright.</p><h3 id="Cancer-Disease-Insurance"><a href="#Cancer-Disease-Insurance" class="headerlink" title="Cancer&#x2F;Disease Insurance"></a>Cancer&#x2F;Disease Insurance</h3><p>It’s much more worthwhile to spend the money on a good health insurance policy, and this kind of insurance 
that only covers certain diseases is quite unnecessary.</p><h3 id="Credit-Card-Insurance"><a href="#Credit-Card-Insurance" class="headerlink" title="Credit Card Insurance"></a>Credit Card Insurance</h3><p>Although ID theft is a real issue these days, identity theft generally doesn’t cost you much actual money as long as you report it properly. Federal law caps an individual’s liability for a stolen credit card at $50, and many banks offer their own Zero Fraud Liability, so it won’t cost you a dime as long as you report it promptly. As for repairing your credit, that is completely free to do on your own; it just takes some time and patience.</p><h3 id="Child-Life-Insurance"><a href="#Child-Life-Insurance" class="headerlink" title="Child Life Insurance"></a>Child Life Insurance</h3><p>As mentioned earlier, life insurance is mainly for family breadwinners to buy protection for their dependents; there is really no need to buy it for small children who don’t earn any money. Even though whole life insurance has some investment and savings features, its return on investment is likely much smaller than accounts such as 529 Plans and IRAs. However, if your child is a child star who contributes to your family’s income, it may be worth getting a policy for them.</p>]]></content>
    
    
    <summary type="html">5 Essential Insurances for Living in the U.S. Do you have all of them?</summary>
    
    
    
    <category term="Liveing in U.S." scheme="https://www.nablepart.com/categories/Liveing-in-U-S/"/>
    
    
    <category term="Insurance" scheme="https://www.nablepart.com/tags/Insurance/"/>
    
    <category term="US" scheme="https://www.nablepart.com/tags/US/"/>
    
    <category term="risk" scheme="https://www.nablepart.com/tags/risk/"/>
    
    <category term="insurance premium" scheme="https://www.nablepart.com/tags/insurance-premium/"/>
    
    <category term="Auto Insurance" scheme="https://www.nablepart.com/tags/Auto-Insurance/"/>
    
    <category term="Health Insurance" scheme="https://www.nablepart.com/tags/Health-Insurance/"/>
    
    <category term="Home Owner&#39;s/Renter&#39;s Insurance" scheme="https://www.nablepart.com/tags/Home-Owner-s-Renter-s-Insurance/"/>
    
    <category term="Disability Insurance" scheme="https://www.nablepart.com/tags/Disability-Insurance/"/>
    
    <category term="Life Insurance" scheme="https://www.nablepart.com/tags/Life-Insurance/"/>
    
  </entry>
  
  <entry>
    <title>Why does TCP need three handshakes?</title>
    <link href="https://www.nablepart.com/aecff306d5d7/"/>
    <id>https://www.nablepart.com/aecff306d5d7/</id>
    <published>2024-03-06T13:00:00.000Z</published>
    <updated>2025-08-25T09:00:39.794Z</updated>
    
    <content type="html"><![CDATA[<p>The TCP three-way handshake is designed to reliably establish a connection over an unreliable Internet, and the three handshakes ensure that both ends are capable of sending and receiving. So why three instead of two or four handshakes?</p><p><strong>Why not two handshakes?</strong></p><p>With a two-way handshake, the client sends a SYN to the server, the server replies with a SYN-ACK, and the connection is considered established at that point.</p><p>In this case, if the first SYN request is delayed in the network and the connection is established after the client resends the SYN, then when the delayed SYN finally reaches the server, the server assumes it is a new connection request. The client, however, ignores the server’s response to it, so the server keeps waiting and wastes resources.</p><p><strong>Why not four handshakes?</strong></p><p>A fourth handshake adds extra latency and complexity without providing any assurance beyond what three handshakes already give.</p><p>Three handshakes are enough to confirm that both parties can send and receive normally; further confirmations would only add a round-trip time and reduce the efficiency of establishing a connection.</p><p><strong>Three handshakes to establish a connection:</strong></p><ol><li><strong>Client sends SYN</strong>: The client selects a random sequence number x, sends a SYN segment, and enters the SYN_SENT state.</li><li><strong>Server sends SYN-ACK</strong>: The server receives the SYN, selects its own sequence number y, sends a SYN-ACK segment, and enters the SYN_RCVD state.</li><li><strong>Client sends ACK</strong>: The client receives the SYN-ACK, sends an ACK segment, and enters the ESTABLISHED state.</li></ol><p><img 
src="https://github.com/youngjuning/www.nablepart.com/assets/13204332/91259cb2-e614-40bd-8522-48c261a533ba" alt="image"></p><p><strong>Four waves to terminate the connection after data transmission ends:</strong></p><p>Terminating a TCP connection, on the other hand, requires four waves, because a TCP connection is full duplex: both sides can send and receive at the same time. When terminating a connection, each direction must be closed individually, so four waves are required.</p><ol><li><strong>Client sends FIN</strong>: The client has finished sending data and sends a FIN segment.</li><li><strong>Server sends ACK</strong>: The server receives the FIN, sends an ACK to acknowledge it, and enters the CLOSE_WAIT state.</li><li><strong>Server sends FIN</strong>: The server sends its own FIN segment when it is ready to close the connection.</li><li><strong>Client sends ACK</strong>: The client receives the FIN, sends an ACK, and enters the TIME_WAIT state. After waiting long enough to ensure the server received the ACK, the client closes the connection.</li></ol><p><img src="https://github.com/youngjuning/www.nablepart.com/assets/13204332/cc1930c6-14ba-40ed-afb8-53a38e96215f" alt="image"></p><p><strong>Life example:</strong></p><p>You can compare the three handshakes to a telephone conversation. 
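In code, applications never see the SYN, SYN-ACK, or ACK segments directly; the operating system performs the handshake inside the standard socket calls. A minimal Python sketch (a hypothetical local echo pair, with the OS choosing a free port) marks where each step happens:

```python
import socket
import threading

# Minimal local echo pair. The three-way handshake is performed by the
# kernel inside connect()/accept(); applications never see the raw
# SYN / SYN-ACK / ACK segments. Port 0 lets the OS pick a free port.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen()                      # passive open: ready to answer a SYN with SYN-ACK
port = srv.getsockname()[1]

def echo_once():
    conn, _ = srv.accept()        # returns only after SYN / SYN-ACK / ACK complete
    conn.sendall(conn.recv(1024)) # echo the payload back
    conn.close()                  # starts the FIN exchange (the four waves)

t = threading.Thread(target=echo_once)
t.start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))  # active open: send SYN, await SYN-ACK, reply ACK
cli.sendall(b"ping")
data = cli.recv(1024)
print(data)                       # b'ping'
cli.close()                       # client's FIN; TIME_WAIT happens on this side
t.join()
srv.close()
```

Note that accept() and connect() return only once the three-way handshake is complete, and each side's close() triggers one FIN/ACK pair of the four waves.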
When you dial a phone number, the other person answers (the first handshake), you greet each other to make sure the other person can hear you (the second handshake), and then you start the conversation (the third handshake).<br>If you greeted only once, you couldn’t be sure the other person actually heard you; if you greeted many more times, it would be redundant and inefficient.</p><p>At the end of the phone conversation:</p><p>You first say, “Anything else? If not, I’m going to hang up” (the first wave), and wait for the other party to respond; this is equivalent to sending a FIN packet.</p><p>The other party responds, “Let me think whether there’s anything else” (the second wave), which is equivalent to sending an ACK packet; they may still have things to deal with, so the call does not end immediately.</p><p>After a while, the other party confirms, “OK, I have nothing more to say, let’s hang up” (the third wave), which is their FIN packet.</p><p>You respond, “Got it, hanging up” (the fourth wave), corresponding to the ACK packet, after which both parties hang up and the call is over.</p>]]></content>
    
    
    <summary type="html">The TCP three handshake protocol is designed to reliably establish a connection in an unreliable Internet environment, and the three handshakes ensure that both ends are capable of sending and receiving. So why three instead of two or four handshakes?</summary>
    
    
    
    <category term="Programming" scheme="https://www.nablepart.com/categories/Programming/"/>
    
    
    <category term="TCP" scheme="https://www.nablepart.com/tags/TCP/"/>
    
    <category term="Network" scheme="https://www.nablepart.com/tags/Network/"/>
    
  </entry>
  
  <entry>
    <title>AI-Assisted R&amp;D Trends for 2024: From R&amp;D Digitization to AI + Dev Tools 2.0, More Than Copilot</title>
    <link href="https://www.nablepart.com/f954a31fc80e/"/>
    <id>https://www.nablepart.com/f954a31fc80e/</id>
    <published>2024-03-05T02:56:12.000Z</published>
    <updated>2025-08-25T09:00:39.786Z</updated>
    
    <content type="html"><![CDATA[<p>In the last year, a number of companies have landed generative AI in their toolchains. We combined our analysis of these companies with recent “new technology” trends in China, such as the initial rise of native HarmonyOS apps. From these cases and trends, we also see some possible new directions.</p><p>Combining our three-phase framework (LLM as Copilot, LLM as Integrator, LLM as Facilitator) with our internal analysis materials, I would roughly summarize them into 6 trends:</p><ol><li>From single-role assistance to end-to-end assistance.</li><li>Knowledge management for assisted decision making.</li><li>DevOps facilities for AI applications.</li><li>Online fault localization and problem solving.</li><li>The influx of AI-assisted UI design.</li><li>Code translation and inter-system translation.</li></ol><p>Much of this overlaps with conclusions we have reached before, so let’s tell the story in reverse.</p><h2 id="Digitalization-of-R-D-forced-by-generative-AI"><a href="#Digitalization-of-R-D-forced-by-generative-AI" class="headerlink" title="Digitalization of R&amp;D forced by generative AI"></a>Digitalization of R&amp;D forced by generative AI</h2><p><img src="https://cdn.jsdelivr.net/gh/youngjuning/images@main/202403060057528.jpeg"></p><p>Before starting on the new trends, one thing we have to mention is the digitization of R&amp;D. Over the past year, I’ve talked to the R&amp;D leaders of almost 10 companies about AI-assisted R&amp;D. In fact, what prevents most companies from applying generative AI, beyond model limitations, is their poor level of R&amp;D digitization.</p><p>The first problem we have to face is that <strong>standardization has not landed</strong>. 
Simply put, if we think in terms of maturity levels (standardization, platformization, indicator-driven), some organizations are still struggling to land standardization, let alone achieve indicator-driven improvement. Fortunately, generative AI combined with tooling can improve how well specifications land, which is a potential opportunity to bend the curve - provided there is enough determination to push forward.</p><p>In addition, the second problem we have to face is <strong>Management of Knowledge</strong>: there is a lot of tacit, unspoken knowledge in the organization (including informal content such as gossip). The challenges we will encounter are:</p><ul><li>Undocumented, unwritten knowledge.</li><li>Lots of outdated knowledge: you don’t know which document is stale.</li><li>A lot of non-textual knowledge: a photo of a meeting whiteboard where the words are unrecognizable.</li></ul><p>Simply put, these are part of our knowledge debt.</p><h2 id="Code-translation-and-inter-system-translation"><a href="#Code-translation-and-inter-system-translation" class="headerlink" title="Code translation and inter-system translation"></a>Code translation and inter-system translation</h2><p><img src="https://cdn.jsdelivr.net/gh/youngjuning/images@main/202403060057141.jpeg"></p><p>Scenario 1: <strong>Legacy System Migration</strong>. Generative AI performs well at natural-language translation, and even more prominently at translating between programming languages. So last year we analyzed the related features in AutoDev and built a series of legacy-system functions. Among commercial products, we can also see tools such as IBM watsonx Code Assistant for Z, which specializes in COBOL-to-Java migration.</p><p>How to analyze a legacy system for migration remains a complex issue; existing tools are still mostly human-designed migrations, assisted by AI.</p><p>Scenario 2: <strong>Inter-system translation</strong>. 
As more and more large vendors start developing Hongmeng (HarmonyOS) applications, we have seen the advantages of generative AI here in practice. Since the UI differences between mobile systems are not that large, some features can be migrated through translation. Although generative AI models generally lack the newer proprietary knowledge involved (ArkUI, ArkTS, the HarmonyOS APIs), combining <strong>chain-of-thought</strong> prompting and <strong>RAG</strong> with them can achieve acceptable results.</p><h2 id="The-Emergence-of-AI-Assisted-UI-Design"><a href="#The-Emergence-of-AI-Assisted-UI-Design" class="headerlink" title="The Emergence of AI-Assisted UI Design"></a>The Emergence of AI-Assisted UI Design</h2><p>AI-generated code needs to incorporate information such as existing specifications in order to produce code that actually works. For back-end development, where Spring reigns supreme, building this generative-AI-friendly architecture is an easy task. The front-end space, however, is challenging for generative AI, as small, medium, and large organizations each have their own brand guidelines, style guides, and design systems.</p><p>Judging from existing products, AI-assisted UI design falls mainly into three types:</p><ol><li>Prototype generation to assist requirements communication.</li><li>UI design generation combined with low-code platforms.</li><li>UI code generation combined with IDE plug-ins.</li></ol><p>Considering the complexity of front-end requirements, it is obviously easier to start from the second scenario, while scenario 3 is better suited to novices learning a framework and to developers picking up new frameworks.</p><h2 id="Online-fault-localization-and-problem-solving"><a href="#Online-fault-localization-and-problem-solving" class="headerlink" title="Online fault localization and problem solving"></a>Online fault localization and problem solving</h2><p><strong>Online Issue Fix</strong>. 
Before generative AI, conventional deterministic AI had already enabled a great deal of automation. Conventional application performance monitoring (APM) tools can map errors reported by online runtimes to the corresponding faulty code. (PS: combined with information correlating requirements to code, we can accurately deduce which requirement change caused the impact.) With generative AI, online problems can be converted into fix PRs that assist you in fixing the problem; New Relic, for example, offers a similar feature.</p><p><strong>Fault localization</strong>. Troubleshooting networks and problems becomes incredibly important in complex systems composed of a large number of subsystems, such as microservice architectures. Without tools, humans often lose critical information at some point, and AI can assist us in solving such problems, as in AWS’s AI-assisted network troubleshooting.</p><p>Since my expertise is in Dev rather than Ops, I will not read much more into this.</p><h2 id="DevOps-facilities-for-AI-applications"><a href="#DevOps-facilities-for-AI-applications" class="headerlink" title="DevOps facilities for AI applications"></a>DevOps facilities for AI applications</h2><p>A large number of online applications have already introduced AI capabilities, such as the Starbucks face-swap campaign, and these AI applications bring a series of AI infrastructure with them. Therefore, for medium to large organizations, in addition to choosing an appropriate privately deployed model, you also need to build a rapid AI DevOps infrastructure to support it.</p><p>Beyond the various kinds of monitoring of the large models themselves, we also need to track the models’ operating costs, especially once you call third-party APIs, to build a better AiBizDevFinGitSecOps system (🐶🐶🐶🐶). 
Naturally, we need an AI to advise you on your AI finances, such as building caching mechanisms, optimizing prompt length, and so on.</p><h2 id="Knowledge-Management-for-Decision-Making"><a href="#Knowledge-Management-for-Decision-Making" class="headerlink" title="Knowledge Management for Decision Making"></a>Knowledge Management for Decision Making</h2><p>Knowledge management used to be a headache; now it has become a full-body pain (for lack of a better word). I’m sure you readers already understand generative AI very well:</p><ul><li>If you do not give it enough information, whether the generated result is acceptable comes down to luck.</li><li>If you give it plenty of information, it will invariably ignore some important detail and make you angry.</li></ul><p>Angry or not, when you start thinking about adoption, you begin by assuming that once you have an architecture specification, generative AI can assist with architecture decisions. Then you realize that you cannot find an architecture specification that meets the requirements. Similar problems arise in other scenarios.</p><p>PS (a twist): so you should consider prioritizing knowledge management as well, so that when you report back to your leaders, you can reasonably pass the blame.</p><h2 id="From-single-role-assistance-to-end-to-end-assistance"><a href="#From-single-role-assistance-to-end-to-end-assistance" class="headerlink" title="From single-role assistance to end-to-end assistance"></a>From single-role assistance to end-to-end assistance</h2><p>In fact, most of the above is about how AI can move from assisting a single role to assisting end to end, just driven by requirements from different scenarios.</p><p>The difficulty with end-to-end assistance is not the design of the tool or the prompt itself, but whether the processes and specifications are in place. 
If there are problems with the processes and specifications themselves, then we need to explore, scenario by scenario, whether more appropriate strategies exist.</p><h3 id="Other-and-AI-summaries"><a href="#Other-and-AI-summaries" class="headerlink" title="Other and AI summaries"></a>Other and AI summaries</h3><p>Of course, there are other AI-assisted R&amp;D scenarios, such as instant assisted problem fixing, that we’ve discussed in the past.</p><p>This article looks at trends in AI-assisted R&amp;D in 2024, with particular emphasis on the evolution of AI from assisting a single role to assisting end to end. The author begins by noting the importance of R&amp;D digitization in AI adoption and points out the challenges of standardization and knowledge management. He then details six major trends:</p><ol><li><strong>From single-role assistance to end-to-end assistance</strong>: AI is no longer limited to assisting a single role, but extends to every stage of the R&amp;D process.</li><li><strong>Knowledge Management for Decision Aid</strong>: the application of AI in knowledge management has become more important, but it also faces problems of incomplete information and information selection.</li><li><strong>DevOps facilities for AI applications</strong>: introducing AI applications requires building an adapted DevOps infrastructure to support their operation and monitoring.</li><li><strong>Online Fault Location and Problem Solving</strong>: AI applications in online fault location and problem solving are maturing, helping to locate problems quickly and propose solutions.</li><li><strong>Emergence of AI-assisted UI design</strong>: AI in UI design has taken various forms, including assisting requirements communication, UI design generation on low-code platforms, and UI code generation via IDE plug-ins.</li><li><strong>Code Translation and Inter-System 
Translation</strong>: AI applications in code translation and inter-system translation are gradually maturing, especially for legacy-system migration and inter-system feature migration.</li></ol>]]></content>
    
    
    <summary type="html">In the last year, a number of companies have brought generative AI into their toolchains. Combining our analysis of these companies with recent &quot;new technology&quot; trends in China, such as the initial rise of native Hongmeng (HarmonyOS) apps, we can also see some possible new directions.</summary>
    
    
    
    <category term="Technology" scheme="https://www.nablepart.com/categories/Technology/"/>
    
    
    <category term="china" scheme="https://www.nablepart.com/tags/china/"/>
    
    <category term="ai" scheme="https://www.nablepart.com/tags/ai/"/>
    
    <category term="Hongmeng" scheme="https://www.nablepart.com/tags/Hongmeng/"/>
    
  </entry>
  
  <entry>
    <title>The mystery of the ousting of Altman, the &quot;father of ChatGPT&quot;</title>
    <link href="https://www.nablepart.com/b7cdfc5441ea/"/>
    <id>https://www.nablepart.com/b7cdfc5441ea/</id>
    <published>2023-11-23T11:45:26.000Z</published>
    <updated>2025-08-25T09:00:39.794Z</updated>
    
    <content type="html"><![CDATA[<p>On 17 November local time, OpenAI, the American artificial-intelligence research company, officially announced that Sam Altman would step down as CEO and leave the board of directors.</p><p>The announcement was quite harshly worded: “Mr Altman’s departure is the result of a considered decision by the Board of Directors, which believes that Mr Altman has not been sufficiently forthcoming in his communications with the Board of Directors and has impeded the Board’s ability to fulfil its responsibilities. The Board no longer believes he is capable of continuing to lead OpenAI.”</p><p>According to a post by Greg, OpenAI’s chairman and president, the firing went like this: on Thursday night, Altman received a text message from Ilya, the chief scientist, about a meeting at noon on Friday. At noon on Friday, Altman joined the meeting; the rest of the board, except Greg, were there, and Altman was told he was fired. At 12:19 p.m. Greg received a call from Ilya, and at 12:23 p.m. Greg joined the meeting and was told that he had been removed from the board of directors but would stay on in management, and that Altman had been fired.</p><p>At the end of November last year, with the dazzling debut of ChatGPT, a wave of artificial intelligence swept the world, and Altman was honoured as the “father of ChatGPT”. 
A year later he was fired: quite dramatic.</p><p>On November 6, at OpenAI’s inaugural developer conference, Altman announced a series of product updates to developers and ChatGPT users around the world, including the ability to quickly create custom GPTs, realising everyone’s dream of owning their own large model, and said he would compete with the likes of Microsoft and Google.</p><p>On Thursday, Altman also spoke at APEC, expressing bullishness on AI technology, arguing that it does not need major regulation in the short term, and even claiming, “As a species, humanity is now on a path to self-destruction, and AI may be the means to stop it.”</p><p>And then suddenly, on Friday, he was “out of office”.</p><p>At 5:46 a.m. on November 18, Altman responded with a message on the X platform: “I loved my time at OpenAI, it changed me and hopefully it will change the world. Most importantly I have worked with so many talented people. I’ll explain more about what’s next later.”</p><p>Mira Murati, OpenAI’s CTO, has been named interim CEO. At the same time, company president Greg Brockman announced that he would step down as chairman of the board of directors but would continue to hold a position at the company, reporting directly to the CEO.</p><p>The board described interim CEO Murati as technically well-rounded and particularly experienced in AI management and policy implementation. Previous reports have called her “a dynamic and conciliatory figure in AI” who has welcomed government regulation.</p><p>On Friday, she told employees at a company-wide meeting that Microsoft’s relationship with OpenAI remained stable following Altman’s departure. On the news of Altman’s departure, Microsoft shares briefly dived more than 2% late in the session, closing down 1.68%. 
Microsoft issued a statement saying that it has a long-term relationship with OpenAI and will abide by its investment commitments.</p><p>As far as can be seen, Altman was ousted by the board for at least two reasons:</p><ul><li><p>One is that he disagreed with the board on how to handle the relationship with Microsoft.</p></li><li><p>The second is that he disagreed with the board on whether AI needs regulation in the short term.</p></li></ul><h2 id="I-Begging-for-a-package-or-independence"><a href="#I-Begging-for-a-package-or-independence" class="headerlink" title="I. Begging for a package or independence?"></a>I. Being kept, or being independent?</h2><p>Microsoft is by far the largest investor in OpenAI, having invested $1 billion in 2019 and another $10 billion in early 2023.</p><p>Microsoft and OpenAI were once called a “romantic alliance” in the tech world. Look closely, however, and you will find that the shrewd Microsoft never lost its head in the face of “romance”. The terms of the 2019 investment of $1 billion were very favourable to Microsoft: it bought out an exclusive licence to the underlying GPT-3 technology, most of OpenAI’s technology is preferentially licensed to Microsoft products, and Microsoft became OpenAI’s exclusive cloud provider, meaning <strong>a large portion of the money Microsoft invested in OpenAI was to be paid back to Microsoft as cloud usage fees</strong>.</p><p>Last November, OpenAI released the ChatGPT chatbot, which took the world by storm within a few weeks. By early February this year, ChatGPT was on a wild ride, triggering a new global AI race with the giants competing for the top spot. 
Microsoft immediately connected ChatGPT to its cloud services, launched GPT-3- and GPT-4-powered versions of the Office suite and a series of new products, and in particular built ChatGPT into its search engine to launch a new AI-powered version of Bing (Bing Chat). Industry insiders believe this could well help Microsoft gain ground in the search market.</p><p>Every time a new technology emerges, a major industry reshuffle follows. In the global search market in October 2023, Google held 91.55% of the share while Bing had only 3.11%, a slight decline from last year; but as the new technological wave of artificial intelligence hits, the uncertainty of the game rises sharply, which is a huge challenge for the leader Google and a huge opportunity for the follower Microsoft.</p><p>Microsoft obviously does not want OpenAI to provide products and services to other customers; the most favourable situation for Microsoft is for OpenAI to stay behind the scenes as an enabler rather than come to the foreground to serve customers itself. At the same time, Microsoft keeps other enablers and partners to preserve flexibility. It is like a romance in which each party wants many lovers while insisting the other has only one.</p><p>An internal Microsoft document from March this year shows that, to persuade customers to choose its own Azure OpenAI service, Microsoft quietly instructed its sales staff to talk down OpenAI. 
When Meta released its open-source large language model Llama 2, Microsoft announced that it had become Llama 2’s first partner.</p><p>Altman was obviously unwilling to accept such a “kept” role, and was frequently courting new backers.</p><p>Just when he was confident that he could survive on his own, OpenAI’s board of directors staged a “coup” and drove him out of the company overnight.</p><h2 id="II-A-for-profit-company-controlled-by-a-non-profit-organisation"><a href="#II-A-for-profit-company-controlled-by-a-non-profit-organisation" class="headerlink" title="II. A for-profit company controlled by a non-profit organisation"></a>II. A for-profit company controlled by a non-profit organisation</h2><p>While Microsoft has invested heavily in OpenAI, it is investing in OpenAI Global, which was founded in 2019 and is “fully controlled” by the OpenAI non-profit organisation, incorporated in Delaware in 2015.</p><p>OpenAI Global is “permitted to earn and distribute profits” subject to the mission espoused by the parent organisation, OpenAI, which has always emphasised that Microsoft “accepts our capped-profit terms, and that control of AGI technology and governance remains in the hands of the non-profit, and indeed of all humanity”.</p><p>In an interview in January 2023, Altman also said, “The future I would like to see is one where access to AI is super-democratised, and there are multiple AGIs in the world that can help people form multiple perspectives without making anyone too powerful.”</p><p>Under Microsoft’s investment agreement with OpenAI, Microsoft’s share of the profits earned by OpenAI Global is limited to “gains made prior to the realisation of AGI (Artificial General Intelligence)” as set out in OpenAI’s charter.</p><p>“AGI” is defined as “a highly autonomous system capable of outperforming humans in the most economically valuable tasks.” So who decides whether AGI has been “realised”?</p><p>The 
six members of OpenAI’s board of directors, known as the “OpenAI Six”, are: Chairman and President Greg Brockman, Chief Scientist Ilya Sutskever, Chief Executive Officer Sam Altman, and three independent directors who are not OpenAI employees: Adam D’Angelo, Tasha McCauley and Helen Toner.</p><p>If they believe that AGI has been “realised”, then Microsoft no longer has the right to a profit. While it is rare for a non-profit board to decide whether the shareholders of a for-profit company will receive a profit, OpenAI is unique in that it was founded by Altman and others as a non-profit organisation with a mission to ensure safe artificial general intelligence (AGI) for the benefit of mankind.</p><p>They saw the non-profit board as focused on benefiting all of humanity, while a for-profit board serves investors. They also endeavoured to build in another layer of security: keeping the board majority independent, i.e., a majority of the members do not hold equity in OpenAI Global.</p><p>It has been noted that there are indeed some non-profit organisations that own shares in for-profit companies, most notably the Hershey Trust. But those non-profits fully control the for-profit companies under them, with no opposing minority shareholders. In the case of OpenAI, Microsoft’s for-profit interests could directly conflict with the non-profit interests of the controlling entity.</p><p>Once OpenAI achieves its stated AGI mission, Microsoft fears it will be cut out of the picture. 
But at last week’s OpenAI Developer Day event, Altman assured Microsoft CEO Satya Nadella: “I highly appreciate the partnership between the two parties on a technical level … and look forward to working together to achieve the AGI mission.”</p><p>In an interview with the Financial Times on 13 November, the OpenAI boss said that the company’s partnership with Microsoft is “working well” and that he expects “we’ll be able to raise more money over time.”</p><h2 id="III-Idealist-Altman"><a href="#III-Idealist-Altman" class="headerlink" title="III. Idealist Altman"></a>III. Idealist Altman</h2><p>In 2005, Altman dropped out of school with two classmates to start a business, developing Loopt, a social app that let users share their location with friends; the famous startup incubator Y Combinator invested in it.</p><p>Three years later, Steve Jobs invited Altman to attend the Apple iPhone launch event, and the app entered the Apple App Store. After that, Altman developed several more apps, all of them popular, running each for a while and then selling it. In 2011, he became a partner at Y Combinator.</p><p>In the summer of 2015, at a Silicon Valley hotel, Altman hosted a private party to which he invited several industry luminaries, including Tesla’s Musk as well as Musk’s former PayPal partners Peter Thiel and Reid Hoffman.</p><p>At the time, they saw that Silicon Valley giant Google had acquired the AI company DeepMind for $400 million, and agreed that if Google and DeepMind succeeded, they would likely hold a monopoly on AI technology. They could not let that happen.</p><p>On 11 December that year, OpenAI was announced, with funding from Altman, Musk, Hoffman, Thiel and others, and a mission to ensure safe artificial general intelligence (AGI) for the benefit of humanity. 
At its inception it took the form of a non-profit organisation, which caught the industry’s attention.</p><p>DeepMind released the sensational AlphaGo the following year, while OpenAI was still in its infancy. Altman resigned from Y Combinator in 2019 to become CEO of OpenAI so he could focus on growing it.</p><p>It soon became clear to Altman that achieving the mission would require significant funding, which the non-profit structure made hard to raise, and in March 2019 OpenAI announced the creation of a for-profit entity to ensure that it could raise sufficient funding while retaining the mission, governance and oversight of a non-profit organisation. The reason given for this “non-profit + for-profit corporation” structure was that “no existing legal structure that we know of strikes the right balance.”</p><p>It is indeed a paradox: to achieve the mission of “ensuring safe artificial general intelligence (AGI) for the benefit of mankind”, he had to raise huge sums of money, which would need to be returned, and a non-profit organisation could not do that. So another, for-profit entity, a limited liability company, had to be formed.</p><p>Altman also made an unusual decision: <strong>he would not take an equity stake in the company. Having invested in several highly successful tech startups himself, he was already very wealthy and did not need the money. Avoiding any equity would help him stay true to the original mission</strong>.</p><p>This decision, however, actually put off some of OpenAI’s potential investors, who suspected that if Altman took no equity, he did not see a return in the project. Still, in July 2019, OpenAI received a $1 billion investment from Microsoft.</p><h3 id="IV-Is-being-kicked-out-of-the-company-you-founded-a-sign-an-honour"><a href="#IV-Is-being-kicked-out-of-the-company-you-founded-a-sign-an-honour" class="headerlink" title="IV. 
Is being kicked out of the company you founded a sign, an honour?"></a>IV. Is being kicked out of the company you founded a sign, an honour?</h3><p>As soon as news of Altman’s dismissal broke, the official account of Musk’s social media platform “X” posted a “link to the job application”, with a “just in case anyone needs it”. Mockery mixed with sympathy, very much in Musk’s style; perhaps he was extending an olive branch to Altman.</p><p>In early 2018, Musk told Altman that he thought OpenAI had fallen a long way behind Google, and offered to take control and run OpenAI himself. This had happened at Tesla before: Musk took over and was very successful, and he may have wanted to repeat the Tesla experience.</p><p>This time, however, he did not get control, as Altman and the other founders unanimously rejected the offer. So on 20 February 2018, Musk announced that he was stepping down from OpenAI’s board of directors and would no longer be involved in its affairs in any way, citing a conflict of interest between Tesla’s development of Autopilot technology and OpenAI.</p><p>At the end of November 2022, ChatGPT exploded in popularity as soon as it launched, and Musk was furious. On 17 February 2023, he tweeted, “OpenAI was created under the banner of open source, which is why I named it ‘Open’ AI; it was originally a non-profit meant to be a counterweight to Google, and now it has become a closed-source, Microsoft-controlled, profit-maximising corporation.”</p><p>On 15 March, Musk tweeted again, “I’m still confused as to how a non-profit organisation that I donated about $100 million to in the first place has now become a for-profit company with a $30 billion market cap. 
If this is legit, why isn’t everyone following suit?”</p><p>Recently, OpenAI has been in talks to sell its current employees’ shares at a valuation of $90 billion, the valuation having skyrocketed over the course of the year.</p><p>Musk is understandably angry that a non-profit organisation suddenly became a for-profit company: had that $100 million been an investment in a for-profit company rather than a donation to a non-profit, it would at least have earned him shares. That, however, should not be the main reason for his anger.</p><p>A hundred million dollars is a piece of cake for him. Rather, he believes that Altman betrayed the original intent, purpose and mission of the OpenAI he helped found, which was supposed to stand against Google, a potential monopoly in the field of AI, and ended up fuelling a real-life monopoly, Microsoft.</p><p>Now that Altman has been kicked out of OpenAI, it suggests Musk misjudged him and that he was not in cahoots with Microsoft after all. Instead, this is just the right time for him to join Musk and continue working on AI against the Microsoft monopoly.</p><p>On 12 July this year, Musk announced in a tweet the establishment of an artificial intelligence company, xAI, whose goal is to understand the true nature of the universe. On 5 November, Musk’s xAI team released its first large-model AI product, Grok. According to the introduction, Grok understands the world in real time through the X platform, and is also able to answer tough questions that most other AI systems reject.</p><p>Musk would be happy to “take in” Altman, as he himself has twice been kicked out of companies he founded by their boards of directors, a traumatic experience that lasts a lifetime, and he was kicked out for reasons similar to Altman’s: he wanted to pursue his dream of building an independent, great company, not a third-rate one kept on a leash by the board.</p><p>The other reason, of course, was his pugnacious management style. Either way, he should find it easy to empathise with Altman.</p><p>Musk’s first pot of gold came from Zip2, the online yellow-pages company he founded. He wanted to buy the domain name “city.com” to compete head-to-head with Yahoo and America Online, but the board preferred to relegate the product to the status of a no-name supplier to newspaper conglomerates. Musk resisted, and the board pushed back, reducing his power at the company and in effect driving him out.</p><p>He was initially prepared to fight to the end; then interim CEO Derek Proudian advised him, “This is your first company. Let’s find an acquirer and make some money, so you can do a second, third, and fourth company.”</p><p>In January 1999, four years after Musk and his brother Kimbal founded Zip2, Compaq bought it for $307 million in cash; the 27-year-old Musk received $22 million.</p><p>He then started a second company, the online payments firm X.com, which later merged with Confinity, an online payments company founded by Thiel, Levchin and others, to launch the PayPal service. Thiel and the others took advantage of Musk’s honeymoon to stage a “coup d’état” and kick him out of the company.</p><p>Musk felt he had been stabbed in the back. “I’m so saddened by this that I can’t even begin to describe it,” he wrote in an email. 
“I’ve worked my ass off for this company; I’ve put almost all the cash on the books from Zip2 into it; my marriage is on the line; and they say I’m full of shit, so evil that they won’t even give me a chance to complain.”</p><p>PayPal went public in 2002 and was acquired by eBay for $1.5bn in July that year. Musk got about $250 million in return. But it could have grown into a multi-trillion-dollar company: Musk started it with a vision of social media plus a mega-bank, akin to what we know as WeChat. His insistence on acquiring Twitter was to realise this early dream.</p><p>He said, “This is the mission that Twitter could be fulfilling in the future. You can create what I think X.com should be if you combine social media with a payment platform.”</p><p>Musk later told Inc. magazine: “<strong>Great things are never born in the hands of venture capitalists and professional managers. They don’t have the creativity or the insight</strong>.”</p><p>Twice in three years, Musk was ousted from a company he founded, a far more harrowing experience than Steve Jobs’s ousting from Apple back in the day. Apple deteriorated after Jobs left and finally had to bring him back to save the day.</p><p>I don’t know which script God has handed Altman. No matter what, people who have contributed to the development of human technology and the progress of civilisation always deserve everyone’s respect. Bless him.</p>]]></content>
    
    
    <summary type="html">On November 17, OpenAI, the American artificial intelligence research company, announced the dismissal of CEO Altman, sparking speculation and controversy. Altman was accused of not being sufficiently candid with the board and of hampering its ability to perform its duties.</summary>
    
    
    
    <category term="Technology" scheme="https://www.nablepart.com/categories/Technology/"/>
    
    
    <category term="Musk" scheme="https://www.nablepart.com/tags/Musk/"/>
    
    <category term="Starship" scheme="https://www.nablepart.com/tags/Starship/"/>
    
  </entry>
  
  <entry>
    <title>Why US domestic funds missed weight-loss drugs</title>
    <link href="https://www.nablepart.com/ef0bc5a91328/"/>
    <id>https://www.nablepart.com/ef0bc5a91328/</id>
    <published>2023-11-22T11:57:24.000Z</published>
    <updated>2025-08-25T09:00:39.798Z</updated>
    
    <content type="html"><![CDATA[<p>The recent $2 billion deal between Cheng Yi Bio and AZ (AstraZeneca) has caused a big stir in the industry. Yet amid the cheers of domestic VCs and PEs, Xiaoyue noticed a very interesting phenomenon: <strong>US local funds took almost no part in the financing of several popular weight-loss drug companies</strong>.</p><p>Take Cheng Yi, for example: from the angel round at its founding in 2018, to the Series A in 2020 and the Series B in 2023, the only US-dollar fund traceable across the three financing rounds is Kang Xi (Delos); no US domestic funds participated.</p><h2 id="I-This-should-be-the-favourite-story-of-the-US-dollar-fund"><a href="#I-This-should-be-the-favourite-story-of-the-US-dollar-fund" class="headerlink" title="I. This should be the favourite story of the US dollar fund."></a>I. This should be the favourite story of the US dollar fund.</h2><p>This situation is unusual: Cheng Yi’s founding team all came from MNCs, and what they are doing carries high certainty. <strong>Such a background and such a story are exactly the type of project favoured by American PE and VC firms</strong>.</p><p>Cheng Yi might be explained away as a Shanghai-based team that US local funds cannot reach (in fact a far-fetched excuse: plenty of domestic biotech teams with MNC backgrounds have raised financing from US local institutions, and everyone has a few friends in the US who can make VC introductions).</p><p>Then look at another star project, Structure (Shuo Di Bio), and it becomes obvious: this too is a star weight-loss drug project, with an even more prominent background: the well-known listed AI pharma company Schrödinger plus industry heavyweight Raymond Stevens. 
Ray itself has rich entrepreneurial experience, or GPCR field well-known scientists, you can go to search the resume.</p><p>The company itself is headquartered in the U.S., and it’s still a popular track for diet pills, so it should be able to raise money in the U.S., right? No, Third Rock, MPM, F-Prime, all of them didn’t make any money. <strong>Company has been unable to raise money in the United States, it is China’s dollar fund to save it</strong>: from its inception in 2016 to the IPO in 2023, the investors are Sequoia China, Stowe Capital, Qiming Venture Capital, WuXi AppTec and so on.</p><p>Ditto for checking local US biotech financing news, there are indeed very few diet pill projects. In the U.S. venture capital is highly developed, no company means no one is willing to pay money.</p><p>US local institutions are more familiar with overseas new drug market. And the story of diet pills was first validated in Europe and the United States. It can be said that the A-share speculation on the concept of diet pills, than the U.S. stock market is at least 2 years behind. ** Regardless of investment preferences, from the track market space judgement, in the diet drug track, the U.S. fund has more reasons to go out than domestic funds. **</p><p>Shodi’s stock price is all the way up after the listing, and I think it must have made the U.S. institutions that didn’t invest very depressed. From 2016 to now, in the domestic institutions have stepped in the weight loss investment opportunities, <strong>why the U.S. domestic funds nearly collective missed this track in the weight loss drug</strong>?</p><p>Is it that U.S. funds do not like to invest in innovative drugs? No, over the past five years the U.S. 
funds have placed heavy bets in the field of innovative drugs, the scale of which is much larger than the domestic.</p><h2 id="ii-Behind-it-is-the-difference-in-the-perception-of-the-disease-field-between-China-and-the-United-States"><a href="#ii-Behind-it-is-the-difference-in-the-perception-of-the-disease-field-between-China-and-the-United-States" class="headerlink" title="ii. Behind it is the difference in the perception of the disease field between China and the United States"></a>ii. Behind it is the difference in the perception of the disease field between China and the United States</h2><p>Is it because the U.S. fund is not professional enough, can’t understand clinical data, and can’t judge the molecular value? Obviously not, their team is professional enough, and GLP-1 oral small molecule, it is not difficult to judge. There must be a reason behind it:</p><p>The point of this, I think, is that <strong>the difference in market perceptions of disease areas is really a huge difference between the two major drug markets in the US and China</strong>.</p><p>In the view of the United States local pharmaceutical investment institutions: the field of oncology is very suitable for biotech survival and development, the field of rare diseases biotech also has a very good opportunity to bend the road to overtake the car, and chronic diseases, especially in the field of metabolism, pharma occupies a dominant advantage, small biotech has no opportunity.</p><p>After decades of competition in the U.S. local drug market, there are abundant cases of biotechs having more opportunities: many drugs in the field of oncology come from biotechs, and in the field of rare diseases, there are small companies such as Shire, Alexion, Vertex, and so on, which have grown up to be giants. 
M&amp;A in these two fields is also the most.</p><p>In the view of domestic pharmaceutical investment organisations: <strong>the king and his majesty, would rather have seed, big pharmaceutical enterprises do well, I can also go to grab the cake, big pharmaceutical enterprises do not do well, I have to go on</strong>.</p><p> <strong>On the contrary, rare diseases are generally less popular because of affordability issues</strong>.</p><p>The domestic pharmaceutical industry, so far, has not yet established a pattern, giving biotechs the opportunity to overtake in various disease areas. In the oncology sector, RMB funds are more cautious than USD funds due to the difficulty of setting up a sales force in China.</p><p>I think there are 2 reasons why US funds missed the diet drug wave:</p><ul><li><strong>more concerned about oncology projects</strong></li><li><strong>Feel that there are not many opportunities for small companies in the metabolic field</strong></li></ul><h2 id="III-The-more-specialised-you-are-in-metabolism-the-more-scared-you-are"><a href="#III-The-more-specialised-you-are-in-metabolism-the-more-scared-you-are" class="headerlink" title="III. The more specialised you are in metabolism, the more scared you are"></a>III. The more specialised you are in metabolism, the more scared you are</h2><p>There is a saying in the investment world that the more you know about metabolism, the more you are afraid to invest in projects in this field. Why? 
<strong>Metabolic field attaches great importance to long-term safety and dosing cycle advantages, phase III requires a large number of people, large investment, and long follow-up time, several MNCs led by Eli Lilly, Novo Nordisk, with the first mover advantage of insulin, have established a dominant hegemony in the field for more than 10 years</strong>, which is mainly embodied in the following:</p><p><strong>Clinical data crushing</strong>: The giants are relentlessly attacking each other with head-to-head clinicals of nearly 10,000 people, showing no mercy, and in this kind of battlefield where billions of dollars are being invested, small biotechs can’t even be counted as cannon fodder. Calculating how rich the current clinical data of Simeoglutide, the amount of money invested behind the scared to death, the small Yue sold itself for a million years may not be enough. biotech financing small hundreds of millions of dollars, completely insufficient ah!</p><p><strong>Pipeline layout comprehensive leading</strong>: Big pharmaceutical companies on the one hand, the existing drug dosage optimisation, drug delivery cycle extension, improve competitiveness and life cycle, on the other hand, advance the layout of the next generation or even the next generation of drugs, such as the GLP-1 triple target. biotech what to take and MNC to go to the PK, you only have five threes, other people have a straight flush + king of bombs ah.</p><p>In the metabolic field, the more specialised you are, the more fearful you are. Especially the dollar funds, most of them know Eli Lilly and Novo Nordisk very well, see the metabolism of new drug projects equal to the bottomless pit of clinical development, the psychology of these two MNC indestructible, naturally unwilling to pay too much attention to metabolism projects.</p><p><strong>In a sense, this argument also holds true in the Chinese market, but only for the diabetes field</strong>. 
Koshi also believes that it is difficult to invest in diabetes projects in the domestic market. Because in the diabetes field, also formed a stable pattern, Eli Lilly &#x2F; Novo Nordisk + Gan &amp; Lee &#x2F; Tonghua Dongbao, 4 big giants have occupied the market for many years.</p><p>Diet drugs are different, due to regulatory approval of diet drugs for indications later than the foreign market, the future data PK will not be as intense as abroad, clinical investment will not be as outrageous as the United States, the Health Insurance Bureau also welcome more entrants, the proportion of domestic fat than abroad, and not as fat as the fat abroad. Under the superposition of all these reasons, the domestic market for weight loss drugs can be said to have just sprouted, and the pattern has not yet been determined, and biotech still has a lot of opportunities. This is also the reason why domestic institutions dare to go out.</p><p>**People can only ever earn money within the scope of their own perception, and this, already enough. **When investing in the metabolism field, you still have to be very vigilant. Don’t be like NASH and get directly affected by GLP-1.</p>]]></content>
    
    
    <summary type="html">The article explores why U.S.-based funds have little to no involvement in the weight loss drug investment space, and points to the difference in perception between the two major drug markets in China and the United States as one of the reasons.</summary>
    
    
    
    <category term="Finance" scheme="https://www.nablepart.com/categories/Finance/"/>
    
    
    <category term="Artical" scheme="https://www.nablepart.com/tags/Artical/"/>
    
  </entry>
  
  <entry>
    <title>Unlocking the Black Art of MySQL: Transactions and Isolation</title>
    <link href="https://www.nablepart.com/f1af130ebf50/"/>
    <id>https://www.nablepart.com/f1af130ebf50/</id>
    <published>2023-11-06T23:04:00.000Z</published>
    <updated>2025-08-25T09:00:39.798Z</updated>
    
    <content type="html"><![CDATA[<h2 id="1-Introduction"><a href="#1-Introduction" class="headerlink" title="1. Introduction"></a>1. Introduction</h2><p><img src="https://s2.loli.net/2023/11/07/vaxf5TzS4n3sNq7.webp"></p><p>In MySQL, the most frequently asked questions are transaction, isolation level, and MVCC, whether it is a large Internet company, a small factory, or even a state-owned enterprise, their coverage rate is as high as 80%.</p><p>In fact, the interviewer also knows that everyone will memorize the eight-legged text, but can say that understand, and even say thorough candidates are very rare.</p><p>So today I’m going to take you to unlock the black technology hidden in the bottom of MySQL: transactions and isolation.</p><h2 id="2-Transactions"><a href="#2-Transactions" class="headerlink" title="2. Transactions"></a>2. Transactions</h2><h2 id="2-1-Straight-Reward"><a href="#2-1-Straight-Reward" class="headerlink" title="2.1 Straight Reward"></a>2.1 Straight Reward</h2><p>First, let’s talk about transactions.</p><p>Transactions are like a magic show that **ensure that a series of database operations either all execute successfully or none at all. **</p><p>Let’s say you’re watching a live stream and you want to reward the hostess with 500 bucks. 
You need to deduct your account balance and increase the hostess’s account amount at the same time.</p><p><img src="https://s2.loli.net/2023/11/07/95FQwdimPVATEDf.webp"></p><p>If one of the two operations of transferring money fails, then you can lose money or have it disappear and the beauty queen won’t receive it.</p><p>This is where transactions come in handy.</p><p>It can ensure that both operations either succeed or fail at the same time, there will never be an embarrassing situation of half-success and half-failure.</p><p>So, let’s <strong>summarize:</strong></p><ul><li><p>Q: Why do databases have transactions?</p></li><li><p>A: In order to ensure that the business runs properly and the data is ultimately consistent.</p></li></ul><blockquote><p>For those who don’t understand what ultimate consistency is, take a look at this previous post of mine: <a href="https://link.juejin.cn/?target=http://mp.weixin.qq.com/s?__biz%25">In-depth: Distributed, CAP, and BASE Theory</a> 3DMzI5Nzk2MDgwNg%3D%3D%26mid%3D2247484896%26idx%3D1%26sn%3D60dd09486fc9ecc652af917d8a311419%26chksm% 3Decac51e9dbdbd8ffc10b79699ea7e4a8fb00aabc743b15cc5c3311970a9e3046592cbb879364%26scene%3D21%23wechat_redirect “<a href="http://mp.weixin.qq/">http://mp.weixin.qq</a>. 
com&#x2F;s?__biz&#x3D;MzI5Nzk2MDgwNg&#x3D;&#x3D;&amp;mid&#x3D;2247484896&amp;idx&#x3D;1&amp;sn&#x3D;60dd09486fc9ecc652af917d8a311419&amp;chksm&#x3D; ecac51e9dbdbd8ffc10b79699ea7e4a8fb00aabc743b15cc5c3311970a9e3046592cbb879364&amp;scene&#x3D;21#wechat_redirect”)</p></blockquote><h3 id="2-2-Transaction-Characterization"><a href="#2-2-Transaction-Characterization" class="headerlink" title="2.2 Transaction Characterization"></a>2.2 Transaction Characterization</h3><p>Now that you understand what a transaction is and why you need one, let’s talk about the 4 characteristics of transactions.</p><p>Let’s talk about the 4 characteristics of transactions: <strong>Atomicity, Consistency, Isolation, and Durability</strong>, or ACID for short.</p><h4 id="Atomicity"><a href="#Atomicity" class="headerlink" title="Atomicity"></a>Atomicity</h4><p>Atomicity means that <strong>a transaction contains operations that are either all successful or all unsuccessful</strong>.</p><p>For example, the initial balances of accounts A and B are $800 and $100. At this point, A transfers $500 to B. The breakdown is A account minus $500 and B account plus $500.</p><p>The end result is that account A has a balance of $300 and account B has a balance of $600. The operation of updating the balance of these two accounts is either performed in full or not performed at all.</p><p>Taking the example of rewarding a beautiful anchorwoman, atomicity ensures that either the money is still there, or the money is transferred to the account of the anchorwoman and the anchorwoman’s <strong>thank you brother</strong> is rewarded!</p><h4 id="Consistency-Consistency"><a href="#Consistency-Consistency" class="headerlink" title="Consistency (Consistency)"></a>Consistency (Consistency)</h4><p><strong>The state of consistency is maintained before the transaction is executed, and after it is executed</strong>.</p><p>Two things happen to accounts A and B after a transfer:</p><ol><li>the money is transferred to account B. 
At this point, accounts A and B are $300 and $600 respectively;</li><li>the money is transferred out of the process of database network disconnection, the transaction is rolled back, A, B account or 800, 100 dollars.</li></ol><p>In any case, before and after the transaction, the total amount of A and B bank accounts should be $900, which is inconsistent.</p><h4 id="Isolation"><a href="#Isolation" class="headerlink" title="Isolation"></a>Isolation</h4><p>Isolation is when more than one user accesses the database concurrently, no matter whether it is operating the same library or the same table, the transaction opened by the database for each user can not be interfered by the operation of other transactions, and multiple concurrent transactions should be isolated from each other.</p><p>For example, when A transfers money to B, no matter how others transfer money, it will not affect their transactions.</p><p><img src="https://s2.loli.net/2023/11/07/J8RdDvo3PZykHh2.webp"></p><p>Taking the example of giving a reward to a beautiful anchorwoman, isolation is: no matter how many people are giving the anchor a reward, it will not affect your affairs of transferring money, and it will not affect the anchor to call you a <strong>good brother</strong>!</p><h4 id="Persistence-Durability"><a href="#Persistence-Durability" class="headerlink" title="Persistence (Durability)"></a>Persistence (Durability)</h4><p><strong>Once a transaction has been submitted, then the change to the data in the database is persistent [i.e., saved to disk]</strong> , even in the case of database system failure will not be lost to submit the operation of the transaction.</p><p>Take the example of rewarding a beautiful anchorwoman, persistence is: you just transfer money to the anchor, the money into her account, no matter how many voices <strong>thank you good brother</strong> harvested anchor, the money can not come back.</p><p>Next, we **summarize: **</p><ul><li><p>Q: Why do transactions have these 
characteristics?</p></li><li><p>A: We want to ensure that the data consistency of the transaction, we need some means to achieve, these means are several characteristics of the transaction.</p></li></ul><p>They are atomicity, consistency, isolation, and persistence, where ** consistency is the goal, and atomicity, consistency, and isolation are all means to achieve data consistency**.</p><h2 id="3-Transaction-Concurrency-and-Isolation"><a href="#3-Transaction-Concurrency-and-Isolation" class="headerlink" title="3. Transaction Concurrency and Isolation"></a>3. Transaction Concurrency and Isolation</h2><h3 id="Transaction-concurrency"><a href="#Transaction-concurrency" class="headerlink" title="Transaction concurrency"></a>Transaction concurrency</h3><p>Concurrency is the ability of a computer system or program to handle multiple tasks or operations at the same time, that is, to allow multiple user processes to work on the same critical area.</p><blockquote><p>For those who want to understand concurrency from a process or processor perspective, see this previous article of mine: [GPM Scheduling Model](<a href="https://link.juejin.cn/?target=http://mp.weixin.qq.com/s?__biz=MzI5Nzk2MDgwNg%25">https://link.juejin.cn?target=http%3A%2F%2Fmp.weixin.qq.com%2Fs%3F__biz%3DMzI5Nzk2MDgwNg%</a> 3D%3D%26mid%3D2247484182%26idx%3D1%26sn%3D6d3f54eea5622a2d7f6323cbb553fdd8%26chksm% 3Decac571fdbdbde09cc8beb982e5df0caafdf5c87587cd3fbd69ca86c33724e9368ab957beac3%26scene%3D21%23wechat_redirect “<a href="http://mp.weixin.qq/">http://mp.weixin.qq</a>. 
com&#x2F;s?__biz&#x3D;MzI5Nzk2MDgwNg&#x3D;&#x3D;&amp;mid&#x3D;2247484182&amp;idx&#x3D;1&amp;sn&#x3D;6d3f54eea5622a2d7f6323cbb553fdd8&amp;chksm&#x3D; ecac571fdbdbde09cc8beb982e5df0caafdf5c87587cd3fbd69ca86c33724e9368ab957beac3&amp;scene&#x3D;21#wechat_redirect”)</p></blockquote><p>Take the reward anchor for example, concurrency is multiple viewers want to reward the anchor, if you transfer money together, then the anchor account balance how to modify it?</p><p><img src="https://s2.loli.net/2023/11/07/KJnSksRliU54F9Z.webp"></p><p>The task here is to transfer money, the user process is the server process responsible for the transaction, and the critical zone is the storage space for the anchor account.</p><p>If transaction concurrency occurs, it can cause some unexpected problems, such as the common <strong>Dirty Write, Dirty Read, Duplicate Read, and Phantom Read</strong>.</p><h3 id="Dirty-writes"><a href="#Dirty-writes" class="headerlink" title="Dirty writes"></a>Dirty writes</h3><p>Dirty writing means that during transaction concurrency, <strong>one transaction can modify the data of another ongoing transaction, which may result in one write transaction overwriting the data of another write transaction</strong>.</p><p>When you and Xiao Shuai together to the beauty of the hostess reward, you rewarded 500 dollars, Xiao Shuai rewarded 1,000 dollars, in the write database, you write the data is Xiao Shuai’s data to be overwritten.</p><p>The final result is that your money is gone, and the hostess is saying <strong>thanks for the reward</strong> in the live broadcast!</p><h3 id="Transaction-isolation"><a href="#Transaction-isolation" class="headerlink" title="Transaction isolation"></a>Transaction isolation</h3><p>500 bucks is gone, and the hostess is still ignoring you, you’re sad, but you don’t know what to do?</p><p>Don’t be sad! 
Transaction isolation can help you.</p><p><img src="https://s2.loli.net/2023/11/07/FM8A7eVjqhanWwR.webp"></p><p>MySQL provides transaction isolation levels, including: <strong>Read uncommitted, Read committed, Repeatable reads, and Serialization</strong>, to solve various concurrency problems in transactions, and to cure all kinds of unhappiness.</p><h3 id="RU-Read-uncommitted"><a href="#RU-Read-uncommitted" class="headerlink" title="RU - Read uncommitted"></a>RU - Read uncommitted</h3><p>RU (Read Uncommitted) means that if a transaction starts writing data, another transaction is not allowed to write at the same time, but other transactions are allowed to read this row of data.</p><p>RU can exclude writes, but does not exclude read thread implementations.</p><p>This isolation level solves the dirty write problem above, but there may be <strong>dirty reads, i.e., transaction B reads data that transaction A has not committed</strong>.</p><p>You want to give the beautiful anchorwoman reward 500 bucks, found that the bank card balance is only 300 bucks, then you thought of a few days ago you borrowed 500 bucks Xiaoshuai, so let Xiaoshuai pay back the money.</p><p>Xiao Shuai is very clear about the database isolation mechanism, know that you are in the RU transaction isolation level. So he says he’ll pay you back immediately, and the following scenario occurs:</p><p><img src="https://s2.loli.net/2023/11/07/U5QlW9EceiO4YVT.webp"></p><ul><li><p>Shuai: open transaction A, transfer money to you 500, the transaction is not submitted;</p></li><li><p>You: open transaction B, check the balance, and find that the balance has been added 500, so the Shuai Shuai debit note torn off, and ready to give the anchor reward;</p></li><li><p>Shuai: see the debit note is gone, so revoke the transaction A. 
His money is not a penny less, and you only read the balance in his transaction A, but the real balance did not increase, that is, a dirty read has occurred;</p></li><li><p>You: The balance of the reward payment is insufficient, and you lose the debit note worth 500 bucks.</p></li></ul><p>You are so disappointed that you plan to cut ties with Handsome and continue to learn the rest of the isolation mechanism to see how to prevent dirty reads from occurring.</p><h3 id="RC-Read-committed"><a href="#RC-Read-committed" class="headerlink" title="RC - Read committed"></a>RC - Read committed</h3><p>This level of isolation prevents other transactions from accessing a row of data (both read and write) while it is being written to by one transaction. This ensures that the data read by a transaction is committed, <strong>solving the problem of dirty reads</strong>.</p><p>However, RC will appear ** unrepeatable read ** problem, for example: transaction A need to read the data twice, after reading the first data, there is another transaction B to update the data and submit the transaction.</p><p>At this time, transaction A read the data again, the data has changed, that is, ** transaction in the two read data inconsistent **.</p><p>In order to salvage his friendship, Handsome transfers you 520 bucks, but he thinks it’s okay to pay you back only 500 bucks, so he asks you to pay him back 20 bucks.</p><p>You’ll be too busy watching the show, so you won’t have time to transfer the money. He suggests that you tell him the account password of your bank card, and he’ll only transfer 20 bucks.</p><p><img src="https://s2.loli.net/2023/11/07/YIzGXKCmLRN5Jl3.webp"></p><p>To be on the safe side, you open a transaction to check the card balance and tell Shuai the password, the following scenario happens next:</p><ul><li>You: open transaction A, inquired about the bank card balance of 820;</li><li>You: open transaction B, withdraw 800, and submit transaction B. 
* You: open transaction B, withdraw 800, and submit transaction B;</li><li>You: in transaction A again to check the balance, found that the bank card only 20 dollars, the occurrence of unrepeatable read.</li></ul><p>Not only did you not get the money you borrowed, but you lost 280 bucks. The more you think about it, the more angry you get, and you scold the marshal. Then you continue to study the isolation mechanism to see how to prevent the unrepeatable read problem.</p><h3 id="RR-Repeatable-read"><a href="#RR-Repeatable-read" class="headerlink" title="RR - Repeatable read"></a>RR - Repeatable read</h3><p>When the same data is read multiple times within the same transaction, no other transaction can access the data (including reads and writes) while the transaction is still open.</p><p>This isolation level ** solves the problem of dirty reads and unrepeatable reads ** but there may be ** phantom reads **.</p><p>If transaction A reads the data several times, another transaction B inserts or deletes data in the middle of the data rows, then transaction A reads again, you may find that the number of rows of data has changed.</p><p>In short, **RR - Repeatable Read ensures that the current transaction will not read other transactions have committed <code>update</code> operations, but can not sense other transactions <code>insert &amp;#x548C; delete</code> operations **.</p><p>Shuai knows you won’t lend money again, and was scolded by you, and is resentful. So he thought of using your bank account to mess things up, and the next scenario happened:</p><ul><li>You: open transaction A, want to query the transaction just a few times, the transaction to see the result is 2 times;</li><li>Shuai: open transaction B, found that you can not modify your balance data, so simply to your bank card inside the write 100 times the transaction record, the transaction amount up to tens of millions of dollars, submit transaction B. 
* You: continue inside transaction A, the transaction amount up to tens of millions of dollars;</li><li>You: inside the transaction A continue to query the number of transactions, found that it became 102 times;</li></ul><p>At this point, the police uncle came to the door, said someone reported you malicious money laundering, need to assist in the investigation.</p><p><img src="https://s2.loli.net/2023/11/07/jawgtfP8QVhFcqH.webp"></p><p>Luckily, after some explanation and investigation through the logs of the bank’s database, it was discovered that someone had maliciously tampered with the transaction records, and you returned home safe and sound.</p><p>At this point, you learned the hard way and were shocked to realize that you had made a bad friend! So you sink your teeth into the isolation mechanism.</p><h3 id="Serializable"><a href="#Serializable" class="headerlink" title="Serializable"></a>Serializable</h3><p>At this isolation level, transactions can only be executed sequentially, <strong>solving the problems of dirty reads, unrepeatable reads and phantom reads</strong>. However, it is more expensive and has very low performance, and is generally seldom used.</p><p>In this case, every time a viewer, like you, wants to give a reward to the anchor, you need to wait in line until the previous transaction transaction is completely finished.</p><p>At this point, you learn about the wonders of transactions and the importance of isolation, and plan to learn about databases and stop watching beautiful anchors dance.</p><p><img src="https://s2.loli.net/2023/11/07/CpQ3D67UbVwc5Fl.webp"></p><p>Handsome, on the other hand, gets lost further and further down the road of bureau-oriented programming.</p><h2 id="4-Summary"><a href="#4-Summary" class="headerlink" title="4. Summary"></a>4. 
Summary</h2><p>Let’s summarize that databases solve the various problems that arise from transaction concurrency through isolation levels:</p><ul><li>RU, read uncommitted solves the problem of dirty writes, but dirty reads may occur;</li><li>RC, read committed solves the dirty read problem, but unrepeatable reads may occur;</li><li>RR, Repeatable solves the problem of unrepeatable reads, but phantom reads may occur;</li><li>Serializable, serialization solves the problem of phantom reads, but performance is low.</li></ul><blockquote><p>How does MySQL implement transaction isolation?</p></blockquote><p>The answer is locking. The higher the transaction level, the more concurrent transactions are solved, which also means more locks are added.</p><p>Comparison of number of locks: RU-Read Uncommitted &lt; RC-Read Committed &lt; RR-Repeatable &lt; Serializable-Serialized.</p><p><img src="https://s2.loli.net/2023/11/07/neNL5jRqzHZv1Od.webp"></p><p>However, frequent locking may result in no way to modify the data when reading it, and no way to read it when modifying it, which greatly reduces the database read and write performance, just like the serialized isolation level.</p><p>So, in order to trade off data security and performance, MySQL databases use RR, the Repeatable Read isolation level, by default.</p>]]></content>
    
    
    <summary type="html">Frequent locking may result in reading data with no way to modify it, and modifying data with no way to read it, greatly degrading database read and write performance, just as serialized isolation levels do.</summary>
    
    
    
    <category term="Technology" scheme="https://www.nablepart.com/categories/Technology/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="Backend" scheme="https://www.nablepart.com/tags/Backend/"/>
    
    <category term="MySQL" scheme="https://www.nablepart.com/tags/MySQL/"/>
    
    <category term="network" scheme="https://www.nablepart.com/tags/network/"/>
    
    <category term="Interviews" scheme="https://www.nablepart.com/tags/Interviews/"/>
    
    <category term="serialized" scheme="https://www.nablepart.com/tags/serialized/"/>
    
    <category term="result" scheme="https://www.nablepart.com/tags/result/"/>
    
    <category term="Frequent" scheme="https://www.nablepart.com/tags/Frequent/"/>
    
  </entry>
  
  <entry>
    <title>The Classic Massive Data Problem</title>
    <link href="https://www.nablepart.com/3c36e910155a/"/>
    <id>https://www.nablepart.com/3c36e910155a/</id>
    <published>2023-11-06T22:04:00.000Z</published>
    <updated>2025-08-25T09:00:39.794Z</updated>
    
    <content type="html"><![CDATA[<p>Table of Contents</p><ol><li>Introduction</li><li>five approaches to solving the massive data problem</li><li>Classic Scenario Examples</li><li>Summary</li></ol><h2 id="1-Introduction"><a href="#1-Introduction" class="headerlink" title="1. Introduction"></a>1. Introduction</h2><p>In recent years, high concurrency, distributed, and big data have become a topic that back-end developers can’t get around. When the recruitment software says that high concurrency, big data, and other project experience is preferred, I’m sure a lot of people are secretly shocked, ** projects are CRUD, and there is no chance to get in touch with these scenarios. **.</p><p>However, a great man once said: there are no conditions, to create conditions. Since the work can not be exposed to high concurrency and big data, we can bend the road to overtake the car - usually more attention to similar scenarios in the learning time.</p><p>This article describes the common means of solving big data problems, as well as some classic big data scenarios and solutions, which **includes bytes, Ali, Baidu and other major domestic Internet companies interviews with the original questions **. After reading this, I believe that we will be able to be familiar with these big data related problems next time we encounter them on a project or in an interview.</p><h2 id="2-Five-approaches-to-solving-massive-data-problems"><a href="#2-Five-approaches-to-solving-massive-data-problems" class="headerlink" title="2. Five approaches to solving massive data problems"></a>2. Five approaches to solving massive data problems</h2><h4 id="2-1-Partitioning"><a href="#2-1-Partitioning" class="headerlink" title="2.1 Partitioning"></a>2.1 Partitioning</h4><p>It is well known that ** the computational time required for any computer-solvable problem is related to its size. ** The smaller the size of the problem, the easier it is to solve and the less computation time is required. 
So, for large-scale data problems, we can also divide and conquer and then just merge the results.</p><p><img src="https://s2.loli.net/2023/11/07/XAT5PfrsKiaqn7R.webp"></p><p>Partitioning algorithm is an efficient tool for solving complex problems, it can decompose a problem into several sub-problems, solve these sub-problems one by one, and then combine them together to form an answer to the big problem.</p><p>Computer problems dealing with large amounts of data, the idea of partitioning is basically able to solve, but in general will not be the optimal solution, but can be used as a baseline. for example, in some scenarios we may need to gradually optimize the sub-problems to achieve an optimal solution. <strong>The traditional subsumption sort is the idea of partitioning, involving a large number of files that can not be loaded into memory, sorting and other problems can be solved using this method</strong>.</p><p>Classical scenarios of the partition algorithm: large amounts of data can not be loaded into memory, in order to save time and overhead parallel synchronization to solve the problem and so on.</p><h4 id="2-2-Hash-Hash"><a href="#2-2-Hash-Hash" class="headerlink" title="2.2 Hash (Hash)"></a>2.2 Hash (Hash)</h4><p>Hash, hash function, as defined by Wikipedia hash is a method of creating a “fingerprint” of small numbers from any data. In layman’s terms, the hash algorithm allows data elements to be located and queried more quickly.</p><p><img src="https://s2.loli.net/2023/11/07/tcbv78GeFEYwkZR.webp"></p><p>Hash has a time complexity of O(1) for obtaining a key. Assuming that there are n numbers stored in memory by the hash function, we can obtain the value of a key in a constant amount of time.</p><p>With such efficient access to data, hash is clearly a great tool for solving big data problems.</p><p>And using hash to solve the data access problem is a pretty brutal way to do it: just put it in and take it out! 
The only downside is that <strong>hashing is memory-intensive: all the data has to be loaded into memory</strong>.</p><p>Hash scenario: fast lookups, given enough memory to hold all the data. When the data is too large for memory, hashing can be combined with partitioning: split first, then hash each piece. In practice we use this combination all the time.</p><h4 id="2-3-Bitmap-BitMap"><a href="#2-3-Bitmap-BitMap" class="headerlink" title="2.3 Bitmap (BitMap)"></a>2.3 Bitmap (BitMap)</h4><p>The core idea of the BitMap algorithm is to use an array of bits to record 0-1 states, mapping each datum to a specific position in the bit array: 0 means the datum is absent, 1 means it is present.</p><p><strong>A bitmap marks the presence of elements through this array, so it can be used for fast lookup, deduplication, and sorting</strong>, among other things. Bitmaps are widely used in big-data scenarios because they save an enormous amount of space.</p><p>The next two classic scenarios, deduplication and sorting, show the role a BitMap can play.</p><p>Scenario 1: Finding the non-repeating positive integers among 200 million integers</p><p><strong>Bitmap deduplication</strong></p><p>Since there is not enough memory to hold all these integers, we can use a bitmap. Using a 2-BitMap (each number is assigned 2 bits: 00 means absent, 01 occurred once, 10 occurred multiple times), iterate over all the elements, setting an element's position to 01 the first time it is seen, and to 10 if it is seen again. 
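</p><p>The 00/01/10 update rule just described can be sketched in Go. The <code>twoBitMap</code> type, the toy data, and the tiny value range here are illustrative assumptions; a real 2-BitMap would cover the full integer range:</p>

```go
package main

import "fmt"

// twoBitMap stores 2 bits per value: 00 absent, 01 seen once, 10 seen more.
type twoBitMap []byte

func newTwoBitMap(maxVal int) twoBitMap {
	return make(twoBitMap, maxVal/4+1) // four 2-bit slots per byte
}

func (m twoBitMap) get(v int) byte {
	return (m[v/4] >> (uint(v%4) * 2)) & 0b11
}

func (m twoBitMap) set(v int, state byte) {
	shift := uint(v%4) * 2
	m[v/4] = m[v/4]&^(0b11<<shift) | state<<shift // clear slot, then write state
}

func main() {
	nums := []int{3, 1, 4, 1, 5, 9, 2, 6, 5, 3} // toy data
	m := newTwoBitMap(16)
	for _, n := range nums {
		switch m.get(n) {
		case 0b00:
			m.set(n, 0b01) // first occurrence
		case 0b01:
			m.set(n, 0b10) // seen again; 0b10 stays 0b10 afterwards
		}
	}
	// Count the values marked 01 (appeared exactly once).
	once := 0
	for v := 0; v <= 9; v++ {
		if m.get(v) == 0b01 {
			once++
		}
	}
	fmt.Println(once) // prints 4
}
```

<p>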
Finally, count the elements whose position holds 01 (occurred exactly once).</p><p>Scenario 2: Sorting the integer elements [6,4,2,1,5]</p><p><strong>Bitmap Sorting</strong></p><p>Since the elements to be sorted are all less than 8 (real scenarios may involve far larger values), we can request an 8-bit array (one byte), iterate over the elements, and set the bit at each element's index to 1, giving [0 1 1 0 1 1 1 0].</p><p>After that pass, traversing the bit array yields the final sorted result:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">for i := 0; i &lt; 8; i++ &#123;</span><br><span class="line">   if bitMap[i] == 1 &#123;</span><br><span class="line">      println(i)</span><br><span class="line">   &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><h4 id="2-4-Bloom-Filter"><a href="#2-4-Bloom-Filter" class="headerlink" title="2.4 Bloom Filter"></a>2.4 Bloom Filter</h4><p>The Bloom filter, proposed by Burton Bloom in 1970, consists of a very long <strong>binary vector</strong> and a series of <strong>random hash functions</strong>.</p><p>A Bloom filter is mainly used to determine whether a target datum exists in a massive data set, and for set-intersection problems. 
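</p><p>Before walking through the add/query workflow in detail, here is a toy Go sketch of such a filter. The <code>bloom</code> type, the two FNV-derived hash functions, and the bit-array size are assumptions for illustration, not a production design:</p>

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// bloom is a toy Bloom filter: m bits and k=2 hash positions derived
// from one 64-bit FNV-1a hash (h1 and h1+h2, Kirsch–Mitzenmacher style).
type bloom struct {
	bits []byte
	m    uint64
}

func newBloom(mBits uint64) *bloom {
	return &bloom{bits: make([]byte, (mBits+7)/8), m: mBits}
}

func (b *bloom) positions(s string) [2]uint64 {
	h := fnv.New64a()
	h.Write([]byte(s))
	sum := h.Sum64()
	h1, h2 := sum&0xffffffff, sum>>32|1 // force h2 odd so it is never zero
	return [2]uint64{h1 % b.m, (h1 + h2) % b.m}
}

func (b *bloom) Add(s string) {
	for _, p := range b.positions(s) {
		b.bits[p/8] |= 1 << (p % 8)
	}
}

// MightContain: false means definitely absent; true means probably present.
func (b *bloom) MightContain(s string) bool {
	for _, p := range b.positions(s) {
		if b.bits[p/8]&(1<<(p%8)) == 0 {
			return false
		}
	}
	return true
}

func main() {
	f := newBloom(1 << 16)
	f.Add("https://example.com/a")
	fmt.Println(f.MightContain("https://example.com/a")) // true: added elements are always reported
	fmt.Println(f.MightContain("https://example.com/b")) // false means definitely absent; true would only mean "maybe"
}
```

<p>Real deployments size the bit array and the number of hash functions from the expected element count and the acceptable false-positive rate.</p><p>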
An example of existence testing: by hashing the target datum, a Bloom filter can decide whether it exists in O(k) time, where k is the number of hash functions used, which eliminates most of the traversal time a search would otherwise need.</p><p><img src="https://s2.loli.net/2023/11/07/ki5tSxTXdoR4upz.webp"></p><p>The workflow of the Bloom filter is shown in the figure above; let us walk through adding and querying elements.</p><ol><li>Adding elements<br>To add an element, the Bloom filter applies each of its hash functions to the element's value, producing several hash values.<br>For example, in the figure above, hashing element 1 three times and taking remainders yields the three numbers 3, 5, and 6. We then set the bits at indexes 3, 5, and 6 of the array to 1, and the addition of element 1 is complete.</li><li>Querying elements<br>To query whether an element exists, first apply the same hash operations to it, obtaining the same bit-array positions as when the element was added.<br>For example, hashing element 1 three times gives positions 3, 5, and 6; we check whether the values at all these positions are 1. If any of them is 0, the element definitely does not exist; if all are 1, the element probably exists.</li><li>False positives<br>When elements are added, the positions an element hashes to may already have been set to 1 by other elements. For example, suppose element x hashes to indexes 0, 3, and 6, and the values at those three indexes were all set to 1 while inserting elements 1 and 2. Querying x would then wrongly report that x exists. This is the root cause of misjudgment: the hash positions of an element may all be hit by other elements by accident, so the filter can never assert that an element definitely exists.</li></ol><p>Although a Bloom filter has a certain false-positive rate, the probability is very small, roughly between 0.01% and 0.05%, so in big-data scenarios a Bloom filter can markedly improve lookup efficiency, provided a small margin of error is acceptable.</p><h4 id="2-5-Heap"><a href="#2-5-Heap" class="headerlink" title="2.5 Heap"></a>2.5 Heap</h4><p>A heap is a complete binary tree that comes in two flavors: the big-top (max) heap and the small-top (min) heap. In a big-top heap every parent node is greater than or equal to the values of its left and right children; a small-top heap is the opposite, with every parent less than or equal to its children.</p><p><img src="https://s2.loli.net/2023/11/07/ZdgcVMmI7z1eTEN.webp"></p><p>Building a <strong>big-top heap</strong> or <strong>small-top heap</strong> is the basis of heap sort.</p><p>The core idea of heap sort (taking the big-top heap as the example): keep comparing parent and child values, swapping the larger number up into the parent; after one full round of comparisons, <strong>the maximum of the whole sequence sits at the root on top of the heap</strong>.</p><p>Then swap the root with the last node (the node labeled 8 in the figure), which fixes the maximum in place.</p><p>Next, rebuild a big-top heap from the remaining n-1 elements to obtain their maximum, and repeat the cycle to produce an ascending sequence.</p><blockquote><p>Q: Why not use a small-top heap for ascending order?<br>A: Because there is no way to guarantee ordering within each level. 
For example, the 5th and 8th elements of the small-top heap in the figure above are both 26, even though they sit at different levels.</p></blockquote><p>On large data sets, <strong>heap sort is the standard solution to the TopN problem</strong>, and it covers most find-the-extreme-value questions.</p><h2 id="3-Examples-of-Classic-Scenarios"><a href="#3-Examples-of-Classic-Scenarios" class="headerlink" title="3. Examples of Classic Scenarios"></a>3. Examples of Classic Scenarios</h2><h3 id="1-Finding-non-repeating-integers-in-a-large-number-of-numbers"><a href="#1-Finding-non-repeating-integers-in-a-large-number-of-numbers" class="headerlink" title="1) Finding non-repeating integers in a large number of numbers."></a>1) Finding non-repeating integers in a large number of numbers</h3><h4 id="Problem-Description"><a href="#Problem-Description" class="headerlink" title="Problem Description"></a>Problem Description</h4><p>Find the non-repeating integers among 10 billion integers, given that there is not enough memory to hold them all.</p><h4 id="Question-Answer"><a href="#Question-Answer" class="headerlink" title="Question Answer"></a>Question Answer</h4><p><strong>1. Partitioning + HashMap</strong></p><p>For 10 billion integers at 4 bytes per int, 100 × 10^8 × 4B is about 40GB. 
Reading the whole file into memory clearly will not fit.</p><p>So we split the large file into small ones, for example via a hash operation: assign the 10 billion integers to 1000 files, each holding about 10 million integers, numbered 0 ~ 999. Each file is then traversed into a HashMap whose key is the integer and whose value is its occurrence count.</p><p>Then, from each of the 1000 HashMaps, collect the integers that occurred exactly once, and merge those partial results into the final answer.</p><p><strong>2. Bitmap method</strong></p><p>Since raw integers take a lot of memory, we can instead use one or two bits to mark whether and how often an element occurs, which saves enormous storage space.</p><p>We use 2 bits to represent a number's state: 00 means never seen, 01 seen once, 10 seen multiple times. The 10 billion integers are of type int, each occupying 4 bytes (32 bits), so the memory required is 2^32 × 2 bit &#x3D; 1GB.</p><p>When more than 1GB of memory is available, the bitmap method solves this problem. 
Iterate through the 10 billion numbers, updating the corresponding positions 00-&gt;01 (integer seen once) and 01-&gt;10 (integer seen multiple times), and finally count the positions holding 01 (seen exactly once).</p><p><strong>3. Bloom filter</strong></p><p>Iterate through the 10 billion integers, depositing each into a Bloom filter. Before depositing, run the several hashes and check the bit array: if all the bits are already set, the integer probably exists; if any hash position is unset, the integer definitely has not been seen before.</p><h3 id="2-Duplicate-URLs-found-in-two-large-files"><a href="#2-Duplicate-URLs-found-in-two-large-files" class="headerlink" title="2) Duplicate URLs found in two large files"></a>2) Duplicate URLs found in two large files</h3><h4 id="Problem-Description-1"><a href="#Problem-Description-1" class="headerlink" title="Problem Description"></a>Problem Description</h4><p>Given two files a.txt and b.txt, each holding 5 billion URLs, with each URL occupying 64B and a memory limit of 4G, find the URLs common to files a and b.</p><h4 id="Answer"><a href="#Answer" class="headerlink" title="Answer"></a>Answer</h4><p>Each file is 50 × 10^8 × 64B, about 320GB, so 4G of memory is nowhere near enough to load all the URLs at once.</p><p><strong>Partitioning + HashMap</strong></p><p>This question is very similar to the previous one, except that the integers are replaced by URLs, and we handle it in a similar way.</p><p>First, split each URL file into multiple small files so that every small file fits within the 4G of memory; the corresponding pairs of small files from a and b can then be processed in memory, and the results merged at the end.</p><p>Traverse file a, hashing each URL with hash(URL) % 100 and writing it, according to the result, into a0, a1 … a99.txt, each about 3.2 GB in size. 
File b is traversed the same way, splitting its URLs into b0, b1 … b99.txt.</p><p>After splitting, any identical URLs must land in corresponding small files, i.e., a0 pairs with b0, …, and a99 pairs with b99. All we need to do is find the URLs shared within each of these 100 pairs of small files.</p><p>To find the common URLs, we can use a HashSet&#x2F;HashMap: when a URL from b already exists in the Set&#x2F;Map built from a, it is a duplicate, so we write the duplicated URLs to a separate file, and finally merge all such files.</p><h3 id="3-Top100-for-frequency-of-request-for-large-session-files"><a href="#3-Top100-for-frequency-of-request-for-large-session-files" class="headerlink" title="3) Top100 for frequency of request for large session files"></a>3) Top100 for frequency of request for large session files</h3><h4 id="Problem-Description-2"><a href="#Problem-Description-2" class="headerlink" title="Problem Description"></a>Problem Description</h4><p>There is a file of 100 million conversations, one conversation per line, each no larger than 1KB. With memory limited to 1.5 GB, return the 100 most frequent conversations (Top100).</p><h4 id="Question-Answer-1"><a href="#Question-Answer-1" class="headerlink" title="Question Answer"></a>Question Answer</h4><p>Because of the memory constraint, the large file cannot be read into memory at once, so the same partitioning strategy is used to split it into smaller files for processing.</p><p><strong>1. Partitioning + HashMap</strong></p><p>First, partition the file the same way as in the previous question.</p><p>1.5 GB&#x2F;1 KB is about 1.5 million, so each small file must hold no more than 1.5 million conversations. Hash each conversation with hash(conversation) % 100 and store it in file a[i] (0&lt;&#x3D;i&lt;&#x3D;99); each file then holds about 1 million conversations.</p><p>Then use a HashMap to count, for each file, the 100 most frequent conversations, with the md5 of the conversation as the key and its frequency as the value. That yields 100*100 &#x3D; 10,000 candidate conversations, which are then heap-sorted to find the overall Top100.</p><p><strong>2. Small Top Heap</strong></p><p>Construct a small-top heap of size 100 whose top element is the conversation with the lowest frequency seen so far. While traversing the HashMap, whenever the current conversation's count exceeds the count of the conversation at the top of the heap, replace the top with the new conversation and re-adjust the heap.<br>When the traversal ends, the 100 conversations in the small-top heap are the 100 most frequent conversations we need.</p><h3 id="4-Extract-the-highest-frequency-IP-from-massive-log-data"><a href="#4-Extract-the-highest-frequency-IP-from-massive-log-data" class="headerlink" title="4).Extract the highest frequency IP from massive log data"></a>4) Extract the highest frequency IP from massive log data</h3><p><strong>Problem Description</strong></p><p>This is an original interview question from Baidu: a huge file holds massive log data containing access IPs and cannot be read into memory; extract the IP with the highest number of 
accesses to Baidu on a certain day.</p><p><strong>Question Answer</strong></p><p><strong>Partitioning + HashMap</strong></p><p>First use the partitioning strategy to split the file of IP data into multiple small files, then traverse the small files and extract all the IPs that visited Baidu on that day into a single file.<br>Then use a HashMap to count each IP's number of accesses, keeping a variable maxCount that stores the highest count seen so far and comparing continuously to find the IP with the most accesses.</p><h3 id="5-Count-the-number-of-different-numbers-in-a-large-number-of-phone-numbers"><a href="#5-Count-the-number-of-different-numbers-in-a-large-number-of-phone-numbers" class="headerlink" title="5) Count the number of different numbers in a large number of phone numbers"></a>5) Count the number of different numbers in a large number of phone numbers</h3><h4 id="Problem-Description-3"><a href="#Problem-Description-3" class="headerlink" title="Problem Description"></a><strong>Problem Description</strong></h4><p>Count the number of distinct phone numbers in a file containing about 10 billion phone numbers; note that the numbers may contain duplicates.</p><h3 id="Problem-Solution"><a href="#Problem-Solution" class="headerlink" title="Problem Solution"></a><strong>Problem Solution</strong></h3><p><strong>Bitmap method</strong><br>Since a phone number is 11 digits long, with 10 possible values per digit, we request a bit array of length 100 billion (10^11), which at 10^11 × 1 bit requires about 12.5GB of memory. Traverse all the numbers, setting a number's bit to 1 when it occurs and incrementing a counter each time a bit is set to 1 for the first time; when the traversal ends, the counter holds the final answer.</p><h3 id="4-Summary"><a href="#4-Summary" class="headerlink" title="4. Summary"></a>4. Summary</h3><p>Massive-data problems are basically variations on the classic problems above, and the common solutions <strong>are partitioning, Hash, BitMap, Bloom filters, and heap sort</strong>. Other kinds of big-data problems may combine these techniques in different ways.<br>More complex problems may build on text, such as the prefix-tree structures used by search engines or the inverted indexes used for big-data text retrieval; for reasons of space this article does not cover them.<br>For the classic problems above, however, these solutions are already enough. The next time you meet a similar interview question or problem scenario, you will be ready :)</p>]]></content>
    
    
    <summary type="html">How do Partitioning, Hash, Bit Map, Bloom Filter, Heap, solve the massive data problem?</summary>
    
    
    
    <category term="Technology" scheme="https://www.nablepart.com/categories/Technology/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="Backend" scheme="https://www.nablepart.com/tags/Backend/"/>
    
    <category term="Interviews" scheme="https://www.nablepart.com/tags/Interviews/"/>
    
    <category term="Big Data" scheme="https://www.nablepart.com/tags/Big-Data/"/>
    
    <category term="Heap" scheme="https://www.nablepart.com/tags/Heap/"/>
    
    <category term="Filter" scheme="https://www.nablepart.com/tags/Filter/"/>
    
    <category term="Bloom" scheme="https://www.nablepart.com/tags/Bloom/"/>
    
    <category term="Partitioning" scheme="https://www.nablepart.com/tags/Partitioning/"/>
    
  </entry>
  
  <entry>
    <title>Tell Mo that I want to use HTTPs.</title>
    <link href="https://www.nablepart.com/04e1022239d3/"/>
    <id>https://www.nablepart.com/04e1022239d3/</id>
    <published>2023-11-06T21:04:00.000Z</published>
    <updated>2025-08-25T09:00:39.794Z</updated>
    
    <content type="html"><![CDATA[<p>Table of Contents</p><ol><li>Introduction</li><li>What are HTTPs?</li><li>Customizing Certificates</li><li>Enabling HTTPs on the server side</li><li>Postscript</li></ol><h2 id="1-Introduction"><a href="#1-Introduction" class="headerlink" title="1. Introduction"></a>1. Introduction</h2><p>“Construction crews everywhere, at any time, put ‘safety’ first! And for a network project, security is likewise the top priority of an Internet product. So, can you say a little about how data transmission is secured in project development?”</p><p>Before I could gather my thoughts, I heard the interviewer ask, slowly and methodically.</p><p>“Heh, construction crew. As a new-age migrant worker I certainly know that safety comes first. <strong>No slacking off (‘touching fish’) on the job site!</strong>” I wanted to keep bantering, but facing a smiling interviewer it felt rude to dodge, so after a moment I answered slowly: “When we take a project live, to keep interface data transmission secure, access may need to go over HTTPs.”</p><p>At this point some readers may wonder: for secure data access, isn’t a POST request enough?</p><p>Certainly not. “When we design interfaces, GET requests are the least secure, because the transferred data is spliced into the URL and sent directly, while a POST request puts the data into the request body, invisible in the address bar. But there is no difference between the two in transit. <strong>To achieve secure data transfer, HTTPs must still be used.</strong>”</p><h2 id="2-What-are-HTTPs"><a href="#2-What-are-HTTPs" class="headerlink" title="2. What are HTTPs?"></a>2. 
What are HTTPs?</h2><h3 id="2-1-Introduction-to-HTTPs"><a href="#2-1-Introduction-to-HTTPs" class="headerlink" title="2.1 Introduction to HTTPs"></a>2.1 Introduction to HTTPs</h3><p>“So tell me about your understanding of HTTPs,” the interviewer asked, not seeming very surprised by that answer.</p><p>“HTTP communication is everywhere on the network, and programmers familiar with RESTful API development know that even a POST request, transmitted in plaintext, is still not very secure. That is where HTTPs comes in; roughly speaking, <strong>HTTPs &#x3D; HTTP + data encryption + authentication + integrity protection</strong>.”</p><blockquote><p>The essence of HTTPs is to add a new layer between the HTTP application layer and the TCP transport layer. Assuming TLS is the protocol used for HTTPs encryption, HTTP talks to TLS, and TLS in turn talks to TCP.</p></blockquote><h3 id="2-2-Encryption-Process"><a href="#2-2-Encryption-Process" class="headerlink" title="2.2 Encryption Process"></a>2.2 Encryption Process</h3><p><img src="https://s2.loli.net/2023/11/07/msIoX1JhrclUuGg.webp"></p><p>I went on: “HTTPs encryption falls roughly into two phases, <strong>certificate validation</strong> and <strong>data transfer</strong>, which interact as follows:”</p><h4 id="2-2-1-Certificate-verification-process"><a href="#2-2-1-Certificate-verification-process" class="headerlink" title="2.2.1 Certificate verification process"></a>2.2.1 Certificate verification process</h4><ol><li>The browser initiates an HTTPs request;</li></ol><p>(2) The server generates a public/private key pair; <strong>the private key is kept by the server itself, while the public key is placed in the HTTPs certificate and returned to the client</strong>; the certificate also contains the website address, certificate authority, 
expiration date, and so on;</p><p>(3) The client verifies the legitimacy of the certificate (e.g. whether the URL in the certificate matches the current URL and whether the certificate has expired), and raises a warning if it is not legitimate;</p><h4 id="2-2-2-Data-Transmission-Phase"><a href="#2-2-2-Data-Transmission-Phase" class="headerlink" title="2.2.2 Data Transmission Phase"></a>2.2.2 Data Transmission Phase</h4><ol><li>Once the certificate is verified as legitimate, the client generates a random number to serve as the key for a symmetric algorithm;</li></ol><p>(2) <strong>The client encrypts the random number with the public key and transmits it to the server</strong>;</p><ol start="3"><li><p>On receiving the encrypted random number, the server decrypts it with its own private key;</p></li><li><p>Now that the server holds the symmetric key (the random number), it symmetrically encrypts the result data it returns.</p></li></ol><h2 id="3-Homemade-certificates"><a href="#3-Homemade-certificates" class="headerlink" title="3. Homemade certificates"></a>3. Homemade certificates</h2><p>“Okay, so if you were asked to set up a server for HTTPs, how would you do it?” The interviewer seemed to want more real-world experience, so he pressed on.</p><p>Not willing to show weakness, I added: “Having understood the principle of HTTPs and its authentication process, let us sort out the three things needed to enable HTTPs secure communication: <strong>a CA certificate, a server certificate, and the server private key</strong>.”</p><ol><li>A CA (Certification Authority) certificate is issued by a CA to prove the legitimacy of an entity’s identity;</li><li>the CA certificate needs to be installed on the client machine, i.e. 
the browser, to verify the authenticity of the server certificate;</li><li>the server certificate is sent to the client, while the server private key handles data encryption and decryption.</li></ol><p>“So, when we want to stand up an HTTPs service, we first need to apply for a certificate. Two common tools for making homemade certificates are OpenSSL and XCA: the former is command-line, the latter graphical. To keep the process visual, we can use XCA to make the certificate.”</p><h3 id="3-1-Creating-a-certificate-with-XCA"><a href="#3-1-Creating-a-certificate-with-XCA" class="headerlink" title="3.1 Creating a certificate with XCA"></a>3.1 Creating a certificate with XCA</h3><p>At this point the clever, attentive reader may realize: if the interviewer wanted to watch me apply for a certificate, there would clearly not be enough time! Still, to help readers get familiar with the HTTPs encryption process, this thoughtful programmer ❤ sorted out the following steps after the interview:</p><p>First, download and install the XCA tool from <a href="https://link.juejin.cn/?target=http://xca.hohnstaedt.de/" title="http://xca.hohnstaedt.de/">xca.hohnstaedt.de&#x2F;</a>; I downloaded the latest <a href="https://link.juejin.cn/?target=https://www.hohnstaedt.de/xca/index.php/download" title="https://www.hohnstaedt.de/xca/index.php/download">XCA version 2.4.0</a>.</p><ol><li>After installation, open the xca software and, on the main page, click File -&gt; New Database (choose the storage directory for the certificate database, then enter a password)</li></ol><p><img src="https://s2.loli.net/2023/11/07/kOEGSY2o6Ch3FfQ.webp"></p><ol start="2"><li>Switch to the certificate page and click Create Certificate (I had already created one, so a certificate appears in the list; please ignore it)</li></ol><p><img src="https://s2.loli.net/2023/11/07/gBQ9VnLrMNuvHmj.webp"></p><ol start="3"><li>On the Source page, choose the signature, algorithm, and template, then click “Apply all information from the template”</li></ol><p><img src="https://s2.loli.net/2023/11/07/LAsSoXvxZ7dKN1n.webp"></p><ol start="4"><li>Switch to the Subject page, fill in the certificate’s internal name (the remaining fields can be filled in freely), and finally generate a new key</li></ol><p><img src="https://s2.loli.net/2023/11/07/tGopyknX2QTBswL.webp"></p><ol start="5"><li>Click Create and OK to return, then create a server certificate under the newly generated CA certificate [Note: when choosing the signature, select “Use this CA certificate for signing”; since we are now applying for a server-side certificate, choose the default TLS_server template]</li></ol><p><img src="https://s2.loli.net/2023/11/07/VmsoOwWb2gPtqMG.webp"></p><ol start="6"><li>Switch to the Subject page again, fill in the name and other information, click Generate a new key, and click OK to finish creating and return</li></ol><p><img src="https://s2.loli.net/2023/11/07/cjNsRyP6QFbupre.webp"></p><p>At this point the CA certificate and server certificate are both created; next, export them.</p><h3 id="3-2-Exporting-Certificate"><a href="#3-2-Exporting-Certificate" class="headerlink" title="3.2 Exporting Certificate"></a>3.2 Exporting Certificate</h3><ol><li>On the certificate page, export the CA 
certificate.</li></ol><p><img src="https://s2.loli.net/2023/11/07/hmVOKDbN9tCRW2e.webp"></p><ol start="2"><li>Export the server-side certificate</li></ol><p><img src="https://s2.loli.net/2023/11/07/5EM71DNgmTeBAna.webp"></p><ol start="3"><li>On the private key page, export the server-side private key</li></ol><p><img src="https://s2.loli.net/2023/11/07/tXo4gQmjqVvxsup.webp"></p><p>At this point we are done generating certificates with XCA. Next, turn on HTTPs on the <em>server</em> using the certificate and private key.</p><h2 id="4-Enabling-HTTPs-on-the-server-side"><a href="#4-Enabling-HTTPs-on-the-server-side" class="headerlink" title="4. Enabling HTTPs on the server side"></a>4. Enabling HTTPs on the server side</h2><p>The example here is in Golang; other languages are similar, in that the service only needs to switch to HTTPs where it listens.</p><p>To control whether HTTPs is enabled in Go, add three new parameters to the project’s startup entry:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">safeMode := flag.Bool(<span class="string">&quot;safe_mode&quot;</span>, <span class="literal">true</span>, <span class="string">&quot;https mode&quot;</span>)</span><br><span class="line">certPath := flag.String(<span class="string">&quot;cert_path&quot;</span>, <span class="string">&quot;D://runSpace//wecom//ca//server.crt&quot;</span>, <span class="string">&quot;server ca&quot;</span>)</span><br><span class="line">keyPath := flag.String(<span class="string">&quot;key_path&quot;</span>, <span class="string">&quot;D://runSpace//wecom//ca//server.pem&quot;</span>, <span class="string">&quot;private key&quot;</span>)</span><br></pre></td></tr></table></figure><p>(1) As shown in the figure below</p><p><img src="https://s2.loli.net/2023/11/07/UtulK8kRHoMnFJm.webp"></p><ol start="2"><li>safeMode controls whether HTTPs 
are turned on or not.</li></ol><p><img src="https://s2.loli.net/2023/11/07/BOAHrMYIaSu8GlC.webp"></p><p>The logs are as follows</p><figure class="highlight csharp"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">[] Listening <span class="keyword">and</span> serving HTTPS <span class="keyword">on</span> :<span class="number">80</span></span><br></pre></td></tr></table></figure><ol start="3"><li>After startup completes, you can type <a href="https://127.0.0.1:80/">https://127.0.0.1:80</a> into your browser to access the service</li></ol><p><img src="https://s2.loli.net/2023/11/07/Hdor3V6nSmPxCEl.webp"></p><p>Since the CA certificate has not been added to the browser, the visit triggers a warning; just choose [Advanced -&gt; Proceed].</p><p>(4) If you do not want this warning on every visit, add the CA certificate we just created to the browser; the Google Chrome import process is as follows</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/b750cb19cc3b48728229e1bb15584668%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>(5) Via [Manage Certificates -&gt; Import], import the certificate just created in XCA, then close and restart the browser.</p><p><img src="https://s2.loli.net/2023/11/07/53KOXYbrNAVqiW2.webp"></p><p>At this point the HTTPs service can be accessed normally. To switch back to HTTP access, just set the command-line parameter <strong>safe_mode to false</strong>.</p><blockquote><p>The full code is at <a href="https://link.juejin.cn/?target=https://github.com/yangfx15/wecom/blob/main/main.go" title="https://github.com/yangfx15/wecom/blob/main/main.go">GitHub</a>.</p></blockquote><h2 id="5-Postscript"><a href="#5-Postscript" class="headerlink" title="5. Postscript"></a>5. 
Postscript</h2><p>The interviewer nodded slightly and asked again, “During the HTTPS encryption process, why are two different schemes used: asymmetric encryption for authentication and key exchange, and symmetric encryption for data transmission?”</p><p>I smiled wryly; the interviewer had dug this pit a thousand times, and candidates really do step into it voluntarily: “That is of course determined by the characteristics of each:”</p><ul><li>Symmetric encryption: <strong>both parties use the same key to encrypt and decrypt messages</strong>. Since the algorithm is public, the key must not be disclosed. It requires little computation and encrypts quickly; the drawbacks are weaker security and difficult key management. Examples: AES, IDEA.</li><li>Asymmetric encryption: <strong>messages can only be encrypted and decrypted with a paired public and private key; generally the public key encrypts and the private key decrypts</strong>. Process: Party A generates a key pair and publishes one of the keys as the public key; Party B obtains the public key, encrypts the data, and sends it to Party A, who decrypts it with the private key. It is secure, but encryption is slower; an example is the RSA algorithm.</li></ul><p>“HTTPS uses <strong>hybrid encryption</strong>, which combines the advantages of symmetric and asymmetric encryption: the symmetric session key is transmitted via asymmetric encryption. In this way, both the security of the key exchange [asymmetric encryption] and the efficiency of the data exchange [symmetric encryption] are ensured.”</p><p>The interviewer nodded slightly, “Uh-huh, that’s enough of that question for now, next ……” With such a good start, my legs stopped shaking and my voice stopped trembling for the rest of the interview. <strong>No more studying HTTPS tonight; I’m calling Lao Mo to go eat fish together!</strong> Emmm, what a good day~</p>]]></content>
    
    
    <summary type="html">Years later, facing the technical interviewer from Green Alliance Technology, the programmer Xiao ❤ would recall that distant afternoon when the hacker Tom showed him a network firewall disintegrating in an instant.</summary>
    
    
    
    <category term="Backend" scheme="https://www.nablepart.com/categories/Backend/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="Backend" scheme="https://www.nablepart.com/tags/Backend/"/>
    
    <category term="framework" scheme="https://www.nablepart.com/tags/framework/"/>
    
    <category term="Interviews" scheme="https://www.nablepart.com/tags/Interviews/"/>
    
    <category term="recognize" scheme="https://www.nablepart.com/tags/recognize/"/>
    
    <category term="technical" scheme="https://www.nablepart.com/tags/technical/"/>
    
    <category term="Security" scheme="https://www.nablepart.com/tags/Security/"/>
    
    <category term="network firewall" scheme="https://www.nablepart.com/tags/network-firewall/"/>
    
  </entry>
  
  <entry>
    <title>Swaggo Automated API Documentation (Developer Utility)</title>
    <link href="https://www.nablepart.com/3b0678482067/"/>
    <id>https://www.nablepart.com/3b0678482067/</id>
    <published>2023-11-06T20:04:00.000Z</published>
    <updated>2025-08-25T09:00:39.794Z</updated>
    
    <content type="html"><![CDATA[<ul><li><h2 id="Table-of-Contents"><a href="#Table-of-Contents" class="headerlink" title="Table of Contents"></a>Table of Contents</h2><ol><li>Introduction</li><li>Introduction to Swagger</li><li>Building a Swagger Project</li><li>Introducing Swagger UI for rendering document pages</li><li>The flag library controls whether or not the UI page is rendered.</li><li>Summary</li></ol><h2 id="1-Introduction"><a href="#1-Introduction" class="headerlink" title="1. Introduction"></a>1. Introduction</h2><p>For a back-end developer in the Internet industry, the daily 996/007 development grind is not the hardest part; the hardest part is often sitting through boring, inefficient meetings day after day, plus endless cross-team alignment.</p><p>When each product requirement arrives, the coding itself is relatively easy; everything else, such as system design, design review, development coordination, and upstream and downstream communication (bickering and blame-shifting), is time-consuming and laborious.</p><p>If there is one thing that is indispensable in this whole workflow, it is probably documentation.</p><p>Documents archived for each version of a product include, but are not limited to: product description documents, <strong>system design documents, detailed design documents, interface documents, test documents</strong> and operation and maintenance documents.</p><p>Generally speaking, the documents programmers deal with most are design documents and interface documents. For businesses with high-performance or high-reliability requirements, developers (in addition to testers) may also need to produce self-testing documents before release, including performance-testing and stress-testing documents.</p><p>Thinking about it makes one’s head spin!</p><p>So, as programmers, how do we cope with this bombardment of documents throughout the workflow?</p><ul><li>If it is a design document, you 
may need various UML diagrams to help, expressing the merits of a clear design in as few words as possible;</li><li>If it is a test document, you can combine the test background, tools, and data to show the effectiveness of the test; for example, after certain changes to an interface, the response time dropped from 300ms to under 200ms, a performance improvement of 50%;</li><li>If it is an interface document, then the swag tool I am about to introduce may be able to help you.</li></ul></li></ul><h2 id="2-swagger-introduction"><a href="#2-swagger-introduction" class="headerlink" title="2. swagger introduction"></a>2. swagger introduction</h2><h2 id="2-1-Background"><a href="#2-1-Background" class="headerlink" title="2.1 Background"></a>2.1 Background</h2><p>Years later, facing the code bouncing on the screen, the back-end developer Xiao ❤️ would recall the distant night when he first wrote an interface document. That day, Xiao ❤️ had just finished writing code and, looking at the evening sun, thought that all was well and he could get off work soon. Then front-end developer Xiao A came over: “Hey bro, how is the interface development going? Can we start joint debugging tomorrow?”</p><p>“Emmm, although this part is quite difficult, I have been working day and night to catch up; we should be able to start the joint debugging tomorrow!”</p><p>As soon as Xiao A heard this, his eyes lit up, thinking that he could get off work earlier once joint debugging wrapped up in a couple of days. 
So he said excitedly, “Send me the interface document soon; I’ll add it to the page tonight.”</p><p>At this point the development manager also came over: “Interface documents are very important; they concern the consistency and stability of the front-end and back-end interfaces, and they need to be written and maintained. Xiao ❤️, stay a bit after work and write the document tonight; it won’t take long, half an hour should do it! While writing, pay attention to systematic thinking: where is the value of this thing you are doing? How does what you do differentiate the company from other teams? Does it precipitate a set of reusable assets and methodology? Why are you the one doing it, and why can no one else? Distill that thinking into your daily, weekly, and monthly reports, so they reflect thought about the interface documentation, not just progress. Okay, get busy!”</p><p>The leader had spoken; how could I say no? Even with ten thousand grass-mud horses galloping through my heart, I could only nod with a smile and say yes.</p><p>As a result, every unremarkable interface needed its input parameters, output parameters, and examples defined, so that it could be joined up with the front end and archived later. After much deliberation over the documentation format, a final version took shape. 
The documentation for a particular interface looks like this:</p><blockquote><p><em>Interface description: add xxxx business</em>.<br><em>Interface address: &#x2F;api&#x2F;v1&#x2F;resourcexx</em>.<br><em>Request method: POST</em></p></blockquote><p><em>Request header parameters:</em></p><table><thead><tr><th>Field name</th><th>Type</th><th>Required</th><th>Description</th></tr></thead><tbody><tr><td>Content-Type</td><td>string</td><td>Yes</td><td>application&#x2F;json;charset&#x3D;UTF-8</td></tr></tbody></table><p><em>Request body parameters:</em></p><table><thead><tr><th>Field name</th><th>Type</th><th>Required</th><th>Description</th></tr></thead><tbody><tr><td>app_id</td><td>string</td><td>Yes</td><td>application id</td></tr><tr><td>source_type</td><td>integer</td><td>Yes</td><td>data source, 0: local, 1: data-center</td></tr><tr><td>data_ids</td><td>string</td><td>No</td><td>data set id</td></tr></tbody></table><p>…………</p><p><em>Response parameters:</em></p><table><thead><tr><th>Field name</th><th>Type</th><th>Description</th></tr></thead><tbody><tr><td>code</td><td>integer</td><td>response code: 200 for success, 400 for parameter error, 500 for internal error</td></tr><tr><td>msg</td><td>string</td><td>error message; when the response code is 200, the value is “ok”</td></tr><tr><td>data</td><td>json</td><td>response body, depending on the business</td></tr></tbody></table><p><em>Request example:</em></p><figure class="highlight css"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">curl <span class="attr">--location</span> <span class="attr">--request</span> POST &#x27;http://<span class="number">127.0</span>.<span class="number">0.1</span>:<span class="number">5187</span>/nlp/addMarketBoard<span class="string">&#x27; \</span></span><br><span class="line"><span class="string">--header &#x27;</span>Content-Type: 
application/json<span class="string">&#x27; \</span></span><br><span class="line"><span class="string">--form &#x27;</span>file=@<span class="string">&quot;.../测试小组挂断率.xlsx&quot;</span><span class="string">&#x27; \</span></span><br><span class="line"><span class="string">--form &#x27;</span>file_name=<span class="string">&quot;002&quot;</span><span class="string">&#x27; \</span></span><br><span class="line"><span class="string">--form &#x27;</span>component_type=<span class="string">&quot;0&quot;</span><span class="string">&#x27; \</span></span><br><span class="line"><span class="string">--form &#x27;</span>app_id=<span class="string">&quot;1056&quot;</span><span class="string">&#x27; \</span></span><br><span class="line"><span class="string">--form &#x27;</span>board_name=<span class="string">&quot;minio测试&quot;</span><span class="string">&#x27; \</span></span><br><span class="line"><span class="string">--form &#x27;</span>source_type=<span class="string">&quot;0&quot;</span><span class="string">&#x27; \</span></span><br><span class="line"><span class="string">--form &#x27;</span>data_ids=<span class="string">&quot;67&quot;</span><span class="string">&#x27;</span></span><br></pre></td></tr></table></figure><p>Some people cannot help but ask: surely this does not take much time, right? Indeed, as our leader said, a single interface is not much text; it is a fixed format, mostly copy-and-paste work.</p><p>However, besides my own, there were dozens of interfaces developed together with colleagues…… and the leader’s take was that since it all passed through my hands anyway, I might as well document it all together.</p><p>Whether I was being manipulated, I am not too sure; but by the time the last interface was added to the document, I saw the bright moon in the night sky and fell into meditation: I do not know whether I was lingering over the beauty of the evening sun, or pondering Wang Jiefu’s “when will the bright moon shine on my return” state of mind.</p><p>Anyway, 
I never want to do this job of writing interface documents by hand a second time in my life. So, before it turned 11 o’clock (lights out at 11 in the company), I started to search for tools that can automatically generate interface documents, and I came across it - swagger.</p><h3 id="2-2-What-is-swagger"><a href="#2-2-What-is-swagger" class="headerlink" title="2.2 What is swagger?"></a>2.2 What is swagger?</h3><p>Swagger is the development tool framework around the world’s most widely used OpenAPI Specification (OAS), and one of the world’s most popular RESTful API documentation generation tools. Its advantages are:</p><ul><li>supports cross-platform, cross-language use</li><li>community open source, and very active</li><li>has a very complete ecosystem (Swagger Editor, Swagger Codegen, Swagger UI …)</li></ul><blockquote><p>RESTful API: a common industry routing and design convention. In this style, all data on the Internet is regarded as resources; the request URL names and locates a resource, and the specific operation on the resource is decided by the request method [GET&#x2F;POST&#x2F;DELETE&#x2F;PUT]. 
For example:<br>GET <a href="https://link.juejin.cn/?target=http://localhost:8080/api/v1/resource" title="http://localhost:8080/api/v1/resource">http://localhost:8080/api/v1/resource</a> &#x2F;&#x2F; Get a certain resource from the local server.</p></blockquote><h3 id="2-3-Introduction-to-the-swagger-tool"><a href="#2-3-Introduction-to-the-swagger-tool" class="headerlink" title="2.3 Introduction to the swagger tool"></a>2.3 Introduction to the swagger tool</h3><p>This time, using Go as an example, we use swaggo as the tool to automatically generate the swagger API documentation, and gin-swagger to render the Swagger UI.</p><p>The three toolkits to be introduced are as follows:</p><blockquote><p>go get -u github.com&#x2F;swaggo&#x2F;swag&#x2F;cmd&#x2F;swag (before Go 1.17) or go install github.com&#x2F;swaggo&#x2F;swag&#x2F;cmd&#x2F;swag@latest (Go 1.17 and later)<br>go get github.com&#x2F;swaggo&#x2F;gin-swagger<br>go get github.com&#x2F;swaggo&#x2F;gin-swagger&#x2F;swaggerFiles</p></blockquote><p>More on when to use them later.</p><h2 id="3-Building-the-Swagger-Sample-Project"><a href="#3-Building-the-Swagger-Sample-Project" class="headerlink" title="3. Building the Swagger Sample Project"></a>3. 
Building the Swagger Sample Project</h2><h2 id="3-1-Build-a-new-project"><a href="#3-1-Build-a-new-project" class="headerlink" title="3.1 Build a new project"></a>3.1 Build a new project</h2><p>Operating environment:</p><ol><li>Windows 10 (OS optional)</li><li>Goland</li><li>Git</li><li>Go 1.17.11 (optional)</li></ol><p>The whole project operation is realized in Goland, we default readers have installed the Goland&#x2F;Go SDK, and downloaded the Git tool to execute the command operation.</p><h4 id="3-1-1-Creating-a-New-Project-and-Introducing-gin-swagger"><a href="#3-1-1-Creating-a-New-Project-and-Introducing-gin-swagger" class="headerlink" title="3.1.1 Creating a New Project and Introducing gin-swagger"></a>3.1.1 Creating a New Project and Introducing gin-swagger</h4><p>First, let’s create a new project in Goland and name it swagger-test:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/7c830f0d5b3742db933ad5b872ce9937%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>Then, go ahead and introduce the go mod into the project to manage the dependencies we’ll be downloading later:</p><blockquote><p>go mod init swagger<br>go mod tidy</p></blockquote><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/9644225884ec4b3aba69ab0df155cf90%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>Next, introduce the gin-swagger middleware and the built-in files for swagger file management into the project</p><blockquote><p>go get github.com&#x2F;swaggo&#x2F;gin-swagger<br>go get github.com&#x2F;swaggo&#x2F;gin-swagger&#x2F;swaggerFiles</p></blockquote><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/72580aa4396945eb83f1f1d7643b2853%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><h4 id="3-1-2-Writing-Server-Listening-Code-and-Comments"><a href="#3-1-2-Writing-Server-Listening-Code-and-Comments" class="headerlink" title="3.1.2 Writing Server Listening 
Code and Comments"></a>3.1.2 Writing Server Listening Code and Comments</h4><p>Since gin-swagger has been introduced for this purpose, we will use the gin framework to handle browser requests in the following.</p><p>First, create a new api package and add the Hello method and annotations, which are the interface methods we will use later:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">package</span> api</span><br><span class="line"></span><br><span class="line"><span class="keyword">import</span> (</span><br><span class="line">    <span class="string">&quot;net/http&quot;</span></span><br><span class="line"></span><br><span class="line">    <span class="string">&quot;github.com/gin-gonic/gin&quot;</span></span><br><span class="line">)</span><br><span class="line"></span><br><span class="line"><span class="keyword">type</span> Response <span class="keyword">struct</span> &#123;</span><br><span class="line">    Code    <span class="type">uint32</span>      <span class="string">`json:&quot;code&quot;`</span></span><br><span class="line">    Message <span class="type">string</span>      <span class="string">`json:&quot;message&quot;`</span></span><br><span class="line">    Data    <span class="keyword">interface</span>&#123;&#125; <span class="string">`json:&quot;data&quot;`</span></span><br><span 
class="line">&#125;</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">Hello</span><span class="params">(c *gin.Context)</span></span> &#123;</span><br><span class="line">    res := Response&#123;Code: <span class="number">1001</span>, Message: <span class="string">&quot;first interface, hello&quot;</span>, Data: <span class="string">&quot;connect success!&quot;</span>&#125;</span><br><span class="line">    c.JSON(http.StatusOK, res)</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Among them, the interface annotations in lines 11~25 are manually added according to the definition of swagger, the swaggo tool automatically generates API documentation, these annotations will be used, the basic definition of the annotations are:</p><ul><li>@Summary, interface summary</li><li>@Description, the interface description</li><li>@Tags, the interface tags, used to group the API</li><li>@Accept, the interface accepts the type of input parameter, support mpfd (form), json, etc.</li><li>@Produce, the type of output parameter returned by the interface, supports mpfd (form), json, etc. * @Param, the type of input parameter.</li><li>@Param, the definition of the input parameter, from front to back are:</li></ul><blockquote><p>As shown in the code @Param user_id query string true “User ID” minlength(1) maxlength(100), @Param format is:</p><ol><li>parameter name 2. parameter type 3. data type 4. whether field is required 5. parameter description 6. 
other attributes</li></ol></blockquote><p>Path and response comments about the interface have:</p><ul><li><p>@Success, specify the data of the success response, in the format of 1.HTTP response code 2.response parameter type 3.response data type 4.other description</p></li><li><p>@Failure, specifies the data after a failure response, same as Success</p></li><li><p>@Router, specifying the route and HTTP method.</p></li></ul><p>More fields can be found in the Swaggo documentation: [[gitcode.net&#x2F;mirrors&#x2F;swa…](https:&#x2F;&#x2F; gitcode.net&#x2F;mirrors&#x2F;swaggo&#x2F;swag&#x2F;-&#x2F;blob&#x2F;master&#x2F;README_zh-cn.md)] </p><p>Next, create a new main.go in the project’s home directory, specifying the api&#x2F;v1&#x2F;hello interface to point to the Hello method in the api package:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">package</span> main</span><br><span class="line"></span><br><span class="line"><span class="keyword">import</span> (</span><br><span class="line">    <span class="string">&quot;swagger/api&quot;</span></span><br><span class="line"></span><br><span class="line">    <span class="string">&quot;github.com/gin-gonic/gin&quot;</span></span><br><span class="line">    swaggerFiles <span 
class="string">&quot;github.com/swaggo/files&quot;</span></span><br><span class="line">    ginSwagger <span class="string">&quot;github.com/swaggo/gin-swagger&quot;</span></span><br><span class="line">)</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">main</span><span class="params">()</span></span> &#123;</span><br><span class="line">    r := gin.Default()</span><br><span class="line">    r.GET(<span class="string">&quot;/swagger/*any&quot;</span>, ginSwagger.WrapHandler(swaggerFiles.Handler))</span><br><span class="line"></span><br><span class="line">    v1 := r.Group(<span class="string">&quot;/api/v1&quot;</span>)</span><br><span class="line">    &#123;</span><br><span class="line">       v1.GET(<span class="string">&quot;/hello&quot;</span>, api.Hello)</span><br><span class="line">   &#125;</span><br><span class="line">    r.Run(<span class="string">&quot;:8080&quot;</span>)</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Since we are going to generate the swagger UI page through the main method, we need to add comments to the page definition above the main method, common fields are:</p><ul><li>title, the title of the swagger UI.</li><li>version, the version of the interface</li><li>description, the description of the swagger document</li><li>host, the host address of the record</li><li>BasePath, the base path, which will be displayed in the swagger UI with host automatically spliced in.</li></ul><p>After defining the swagger UI page with the above comments, we need to specify a route to access it:</p><figure class="highlight css"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line"><span class="attribute">r</span> := gin.<span class="built_in">Default</span>()</span><br><span class="line">r.<span class="built_in">GET</span>(<span 
class="string">&quot;/swagger/*any&quot;</span>, ginSwagger.<span class="built_in">WrapHandler</span>(swaggerFiles.Handler))</span><br></pre></td></tr></table></figure><p>gin.Default() defines an initialization engine for gin, and &#x2F;swagger&#x2F;*any serves as the access route. After generating the swagger file and starting the service, you can access the swagger UI page in a browser via 127.0.0.1:8080&#x2F;swagger&#x2F;index.html, with &#x2F;swagger being the path prefix.</p><p>Next, we’ll add a new business access interface. We’ll add a new route group &#x2F;api&#x2F;vi as a prefix, and under this route group we’ll add a &#x2F;hello interface, which will point to the Hello method under the api package. Finally, we specify an 8080 port to listen on.</p><h3 id="3-2-Generating-API-Documentation-with-the-swaggo-Tool"><a href="#3-2-Generating-API-Documentation-with-the-swaggo-Tool" class="headerlink" title="3.2 Generating API Documentation with the swaggo Tool"></a>3.2 Generating API Documentation with the swaggo Tool</h3><h4 id="3-2-1-Installing-the-swaggo-tool"><a href="#3-2-1-Installing-the-swaggo-tool" class="headerlink" title="3.2.1 Installing the swaggo tool"></a>3.2.1 Installing the swaggo tool</h4><p>Once you have defined all of this, you can install the swaggo tool to automatically generate the swagger files. 
There are many ways to do this, but we’ll use the Git method here.</p><p>We first need to download and install Git on our computer; here we assume Git is already installed and the environment configured. Then install swaggo:</p><p>1) Linux installation:</p><blockquote><p>go get -u github.com&#x2F;swaggo&#x2F;swag&#x2F;cmd&#x2F;swag (before Go 1.17)<br>go install github.com&#x2F;swaggo&#x2F;swag&#x2F;cmd&#x2F;swag@latest (Go 1.17 and later)</p></blockquote><p>If the installation fails in a Linux environment, it may be caused by inconsistency between the GOPATH and GOROOT paths; you can modify them with the <code>vi /etc/profile</code> command:</p><blockquote><p>export GOROOT&#x3D;&#x2F;usr&#x2F;local&#x2F;go ##GoLang installation directory.<br>export PATH&#x3D;$GOROOT&#x2F;bin:$PATH<br>export GOPATH&#x3D;&#x2F;usr&#x2F;local&#x2F;go ##GoLang project directory</p></blockquote><p>Then reload the environment: <code>source /etc/profile</code></p><p>2) Windows installation (this method is used by default below):</p><blockquote><p>go get -u github.com&#x2F;swaggo&#x2F;swag&#x2F;cmd&#x2F;swag</p></blockquote><p>3) macOS installation:</p><blockquote><p>mv $GOPATH&#x2F;bin&#x2F;swag &#x2F;usr&#x2F;local&#x2F;go&#x2F;bin</p></blockquote><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/95b4dfe1196444bf951dfa5b4748219f%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>Test whether the installation was successful:</p><blockquote><p>swag -v</p></blockquote><h4 id="3-2-2-Generating-API-Documentation-with-swaggo"><a href="#3-2-2-Generating-API-Documentation-with-swaggo" class="headerlink" title="3.2.2 Generating API Documentation with swaggo"></a>3.2.2 Generating API Documentation with swaggo</h4><p>After the installation of swag is complete, we go to the home directory of the swagger-test project [that is, the root directory where the project was created; in the example, D:\runSpace\swagger-test], and use the command to generate the 
swagger file:</p><blockquote><p>swag init</p></blockquote><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/50cb87d2a7b34db0ba52bad1c14600f1%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>At this point, we can see that the project has automatically generated a docs directory with the newly generated swagger files:</p><blockquote><p>.&#x2F;docs<br>├── docs.go<br>├── swagger.json<br>└── swagger.yaml</p></blockquote><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/ed9544c3f6dd4e4b8cc7e817076a46ad%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><h4 id="3-2-3-Importing-API-Documentation-from-the-docs-Package"><a href="#3-2-3-Importing-API-Documentation-from-the-docs-Package" class="headerlink" title="3.2.3 Importing API Documentation from the docs Package"></a>3.2.3 Importing API Documentation from the docs Package</h4><p>Next, we add the path to the just-generated docs package in the main.go file, and import this path so that the API documentation can be accessed after the service is started:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/c452779fe4e14a529d1bd5cf22a3a17d%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>Then, we start the project with go run main.go:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/dc37555102d049bebbc58989cfbe10d7%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>You can see that port 8080 is up.</p><h2 id="4-Swagger-Document-Rendering-Demonstration-and-Testing"><a href="#4-Swagger-Document-Rendering-Demonstration-and-Testing" class="headerlink" title="4. Swagger Document Rendering Demonstration and Testing"></a>4. 
Swagger Document Rendering Demonstration and Testing</h2><h3 id="4-1-Swagger-UI-Page-Access"><a href="#4-1-Swagger-UI-Page-Access" class="headerlink" title="4.1 Swagger UI Page Access"></a>4.1 Swagger UI Page Access</h3><h4 id="4-1-1-Server-Connectivity-Testing"><a href="#4-1-1-Server-Connectivity-Testing" class="headerlink" title="4.1.1 Server Connectivity Testing"></a>4.1.1 Server Connectivity Testing</h4><p>First, we access the hello interface in the browser:</p><blockquote><p>http:&#x2F;&#x2F;<a href="https://link.juejin.cn/?target=http://127.0.0.1:8080/api/v1/hello" title="http://127.0.0.1:8080/api/v1/hello">127.0.0.1:8080&#x2F;api&#x2F;v1&#x2F;hello</a></p></blockquote><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/f6f6f5161b784174976180d4ce8d5480%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>Successful access indicates that the port is up.</p><h4 id="4-1-2-Swagger-UI-Page-Access"><a href="#4-1-2-Swagger-UI-Page-Access" class="headerlink" title="4.1.2 Swagger UI Page Access"></a>4.1.2 Swagger UI Page Access</h4><p>Next, we open the swagger UI page to view the API documentation for the interface.</p><blockquote><p>Path：http:&#x2F;&#x2F;<a href="https://link.juejin.cn/?target=http://127.0.0.1:8080/api/v1/hello" title="http://127.0.0.1:8080/api/v1/hello">127.0.0.1:8080&#x2F;</a>swagger&#x2F;index.html</p></blockquote><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/297e43d85eff45f0b36ed87247a5c1af%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>As you can see, the swagger UI page is now accessible. 
It contains a basic description of swagger and API interface information, and you can also view the doc.json interface definition online:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/73ae485e727849ad9335760ffd7c4a8b%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>Here, we can see the power of swagger, just through a few comments and route definitions, you can write what we need to define in the interface document.</p><p>Moreover, the swagger API documentation is universal, almost all front-end, back-end and testing staff can understand. If there is a modification of the interface in the development process, we only need to change the comments, and then use the swag init command to update the docs, and then restart the service. Then we can dump the updated docs <a href="http://ip+port/swagger/index.html">http://ip+port/swagger/index.html</a> to the front-end or testers.</p><h3 id="4-2-API-Testing"><a href="#4-2-API-Testing" class="headerlink" title="4.2 API Testing"></a>4.2 API Testing</h3><p>In addition to API documentation, swagger UI also provides interface testing, we can see a Try it out button on the right side of the &#x2F;hello interface “Parameters” on the homepage:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/166f8aa35e7b466489d6157f622ea3e8%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>点击之后可以输入测试用例，输入参数后点击 Execute 执行：</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/317d9c44a47e45e791085cd2a577d5d5%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>The execution steps are:</p><ol><li>Click Try it out button to start the test. 
</li><li>Enter the input parameters. The example has only one input parameter, and it is not business related, so you can fill it in however you like.</li><li>Click the Execute button to run the test.</li><li>Swagger UI generates a curl command and calls the server-side interface.</li><li>The response result is returned: success/fail.</li></ol><p>By now, you should understand the basic usage of the swaggo tool. However, the API documentation is generally only needed for testing in the development environment; once the service is live, the interface documentation should usually be invisible to users. Moreover, regenerating the swagger documentation every time a service goes live also slows down our service compilation.</p><p>Therefore, we can use the flag library’s command-line parsing to control at runtime whether the swagger UI interfaces are hidden.</p><h2 id="5-The-flag-library-controls-whether-to-render-the-Swagger-UI"><a href="#5-The-flag-library-controls-whether-to-render-the-Swagger-UI" class="headerlink" title="5. The flag library controls whether to render the Swagger UI."></a>5. The flag library controls whether to render the Swagger UI.</h2><h3 id="5-1-Adding-flag-parameter-control"><a href="#5-1-Adding-flag-parameter-control" class="headerlink" title="5.1 Adding flag parameter control"></a>5.1 Adding flag parameter control</h3><p>To control whether or not to render the UI page, we use a bool flag named swagger. 
The code for the main function above reads:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">main</span><span class="params">()</span></span> &#123;</span><br><span class="line">    swaggerTag := flag.Bool(<span class="string">&quot;swagger&quot;</span>, <span class="literal">false</span>, <span class="string">&quot;Whether to generate swagger document at build time&quot;</span>)</span><br><span class="line">    flag.Parse()</span><br><span class="line"></span><br><span class="line">    r := gin.Default()</span><br><span class="line">    <span class="keyword">if</span> swaggerTag != <span class="literal">nil</span> &amp;&amp; *swaggerTag &#123;</span><br><span class="line">       r.GET(<span class="string">&quot;/swagger/*any&quot;</span>, ginSwagger.WrapHandler(swaggerFiles.Handler))</span><br><span class="line">   &#125;</span><br><span class="line"></span><br><span class="line">    v1 := r.Group(<span class="string">&quot;/api/v1&quot;</span>)</span><br><span class="line">    &#123;</span><br><span class="line">       v1.GET(<span class="string">&quot;/hello&quot;</span>, api.Hello)</span><br><span class="line">   &#125;</span><br><span class="line">    r.Run(<span class="string">&quot;:8080&quot;</span>)</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Using the flag library’s command parsing, you can pass in swagger 
flags to control whether the swagger UI interface is exposed each time you run the service.</p><p>First, compile the main function into an executable called main.exe, and then run it (run the following in Git Bash):</p><blockquote><p>go build main.go<br>./main.exe</p></blockquote><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/e287ef995c0440f290d5d0eb7f21fa25%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>The service started successfully.</p><h3 id="5-2-Accessing-a-flag-controlled-server"><a href="#5-2-Accessing-a-flag-controlled-server" class="headerlink" title="5.2 Accessing a flag-controlled server"></a>5.2 Accessing a flag-controlled server</h3><h4 id="5-2-1-Without-adding-swagger-parameters"><a href="#5-2-1-Without-adding-swagger-parameters" class="headerlink" title="5.2.1 Without adding swagger parameters"></a>5.2.1 Without adding swagger parameters</h4><p>Next, we access the Swagger UI page by entering the address <a href="http://ip+port/swagger/index.html">http://ip+port/swagger/index.html</a> into the browser. We find that the server returns response code 404: interface does not exist.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/fbf696c0dd0f4f64a04aa34133265ab1%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>As expected, the swagger UI page cannot be accessed without specifying the swagger parameter at runtime.</p><h4 id="5-2-2-Adding-the-swagger-parameter"><a href="#5-2-2-Adding-the-swagger-parameter" class="headerlink" title="5.2.2 Adding the swagger parameter"></a>5.2.2 Adding the swagger parameter</h4><p>Next, let’s add the swagger parameter at runtime to enable access to the UI page:</p><blockquote><p>./main.exe -swagger=true</p></blockquote><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/1cb806212588432597bea8ee54eddea3~tplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>As you can see, the swagger UI access was successful.</p><h2 id="6-Summary"><a href="#6-Summary" class="headerlink" title="6. Summary"></a>6. Summary</h2><p>Summary of Swagger auto-generated docs:</p><ul><li>When using the swag init command to generate documentation, the docs directory must be imported in the service’s main package; otherwise the API information cannot be accessed at runtime;</li><li>Swagger UI access is served by ginSwagger.WrapHandler and can be turned on or off with a flag library parameter;</li><li>After modifying an interface definition, run swag init to update the swagger files, then restart the service for the changes to take effect.</li></ul><p>Swagger generates documentation automatically from the interface annotations. Memorize the few most common annotations and consult the official documentation for the rest. This style of annotation-driven programming makes code more readable, and it can be seen in widely used industry frameworks such as Spring Boot and MyBatis.</p><p>At the end of the day, development is a mental job, but even more so a physical one. That is why I personally respect the lazy, automated approach to programming. 
In your spare time, isn’t it nicer to browse blogs and read technical articles? Failing that, go pick up some industry jargon on the forums; it may come in handy someday for impressing the boss or teasing the interns (🐕)</p>]]></content>
    
    
    <summary type="html">Swaggo is a tool that automatically generates Swagger documentation for Go; developers can use it to easily achieve zero-documentation programming</summary>
    
    
    
    <category term="Technology" scheme="https://www.nablepart.com/categories/Technology/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="framework" scheme="https://www.nablepart.com/tags/framework/"/>
    
    <category term="Backend Technology Sharing" scheme="https://www.nablepart.com/tags/Backend-Technology-Sharing/"/>
    
    <category term="Go" scheme="https://www.nablepart.com/tags/Go/"/>
    
    <category term="Swagger" scheme="https://www.nablepart.com/tags/Swagger/"/>
    
    <category term="implemen" scheme="https://www.nablepart.com/tags/implemen/"/>
    
    <category term="Automated" scheme="https://www.nablepart.com/tags/Automated/"/>
    
    <category term="Documentation" scheme="https://www.nablepart.com/tags/Documentation/"/>
    
  </entry>
  
  <entry>
    <title>Sorting Algorithm Questions</title>
    <link href="https://www.nablepart.com/df77bfcbc2e5/"/>
    <id>https://www.nablepart.com/df77bfcbc2e5/</id>
    <published>2023-11-06T19:04:00.000Z</published>
    <updated>2025-08-25T09:00:39.794Z</updated>
    
    <content type="html"><![CDATA[<p>Table of Contents</p><ol><li>Common Sorting Algorithms</li><li>Getting Started</li><li>Sorting Linked Lists</li><li>Merging Intervals</li><li>Advanced Topics</li><li>Kth largest element in an array</li><li>Finding the median of two sorted arrays</li><li>Summary</li></ol><h2 id="1-Common-Sorting-Algorithms"><a href="#1-Common-Sorting-Algorithms" class="headerlink" title="1. Common Sorting Algorithms"></a>1. Common Sorting Algorithms</h2><p>Common sorting algorithms include insertion sort [O(n^2)], merge sort [O(nlogn)], heap sort [O(nlogn)], and quicksort [O(nlogn) -&gt; O(n^2) in the worst case].</p><p>If you are not familiar with these sorting algorithms, you can learn them at <a href="https://visualgo.net/en/sorting">Visualgo Algorithmic Data Structures</a>, which has animations and algorithm definitions, so I won’t repeat them in this article.</p><h2 id="2-Introductory-topics"><a href="#2-Introductory-topics" class="headerlink" title="2. Introductory topics"></a>2. 
Introductory topics</h2><h4 id="1-Sorted-Chained-Lists"><a href="#1-Sorted-Chained-Lists" class="headerlink" title="1) Sorted Chained Lists"></a>1) Sorting Linked Lists</h4><p>LeetCode problem 148:</p><p><img src="https://s2.loli.net/2023/11/07/n7rJVAGXWEs8fbg.webp"></p><p>This is LeetCode question 148, which is simply stated: sort an unordered linked list, with the advanced requirement of sorting it in <code>O(nlogn)</code> time complexity and constant space complexity.</p><p>Among sorting algorithms, <strong>the ones with O(nlogn) time complexity are merge sort, heap sort, and quicksort</strong>. However, quicksort’s worst-case time complexity is O(n^2), which rules it out here, and merge sort suits linked lists better than heap sort.</p><p>So we solve this problem with merge sort, which is based on the idea of <strong>divide and conquer</strong>, with the following steps:</p><ol><li>Split the list in two at the middle node, which can be found with fast and slow pointers;</li><li>sort the two sublists separately;</li><li>merge the two sorted sublists to obtain the complete sorted list.</li></ol><p>The Go code is as follows:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br></pre></td><td class="code"><pre><span class="line">func sortList(head *ListNode) *ListNode {</span><br><span class="line">    if head == nil || head.Next == nil {</span><br><span class="line">        return head</span><br><span class="line">    }</span><br><span class="line">    // Use fast and slow pointers to find the middle node</span><br><span class="line">    slow, fast := head, head</span><br><span class="line">    for fast.Next != nil &amp;&amp; fast.Next.Next != nil {</span><br><span class="line">        fast = fast.Next.Next</span><br><span class="line">        slow = slow.Next</span><br><span class="line">    }</span><br><span class="line">    next := slow.Next</span><br><span class="line">    slow.Next = nil</span><br><span class="line">    return mergeTwoList(sortList(head), sortList(next))</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line">// mergeTwoList merges two sorted sublists</span><br><span class="line">func mergeTwoList(h1, h2 *ListNode) *ListNode {</span><br><span class="line">    pre := new(ListNode)</span><br><span class="line">    hair := pre</span><br><span class="line">    for h1 != nil &amp;&amp; h2 != nil {</span><br><span class="line">        if h1.Val &lt; h2.Val {</span><br><span class="line">            pre.Next = h1</span><br><span class="line">            h1 = h1.Next</span><br><span class="line">        } else {</span><br><span class="line">            pre.Next = h2</span><br><span class="line">            h2 = h2.Next</span><br><span class="line">        }</span><br><span class="line">        pre = pre.Next</span><br><span class="line">    }</span><br><span class="line">    if h1 != nil {</span><br><span class="line">        pre.Next = h1</span><br><span class="line">    }</span><br><span class="line">    if h2 != nil {</span><br><span class="line">        pre.Next = h2</span><br><span class="line">    }</span><br><span class="line">    return hair.Next</span><br><span class="line">}</span><br></pre></td></tr></table></figure><p>Written recursively, the code is concise and efficient:</p><p><img src="https://s2.loli.net/2023/11/07/itN1aSVIZf6WJOs.webp"></p><h4 id="2-Merging-intervals"><a href="#2-Merging-intervals" class="headerlink" title="2) Merging intervals"></a>2) 
Merging intervals</h4><p>LeetCode problem 56:</p><p><img src="https://s2.loli.net/2023/11/07/PhBNYRkCbdumDXT.webp"></p><p>The general idea is to merge an array of intervals so that no overlapping intervals remain after the merge. Suppose there are intervals [1,3] and [2,6]; they should be merged into [1,6].</p><p>The idea for solving this problem is to compare each interval’s end (its 2nd element) with the start (the 1st element) of the interval that follows; if they overlap, take the larger end as the end of the merged interval. That is, when comparing <code>[1, 3] and [2, 6]</code>, since 3 is greater than 2 the two intervals overlap, and since 6 is greater than 3 the end of the merged interval is 6, giving <code>[1, 6]</code>.</p><p>How can we guarantee that the interval with the smaller start always comes first? For example, with the two intervals <code>[3, 5], [1, 8]</code>, we would have to compare the starts when merging. For simplicity, we can sort all the intervals first, making sure that the interval with the smaller first element comes first. 
In Go, array sorting can be implemented with a simple function:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">sort.Slice(arr, func(i, j int) bool {</span><br><span class="line">    return arr[i][0] &lt; arr[j][0]</span><br><span class="line">})</span><br></pre></td></tr></table></figure><p>Next, we can write the full code:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br></pre></td><td class="code"><pre><span class="line">import "sort"</span><br><span class="line"></span><br><span class="line">func merge(intervals [][]int) [][]int {</span><br><span class="line">    // Sort intervals by their start value</span><br><span class="line">    sort.Slice(intervals, func(i, j int) bool {</span><br><span class="line">        return intervals[i][0] &lt; intervals[j][0]</span><br><span class="line">    })</span><br><span class="line">    var res [][]int</span><br><span class="line">    for _, interval := range intervals {</span><br><span class="line">        size := len(res)</span><br><span class="line">        // Overlap: current start is no greater than the last merged end</span><br><span class="line">        if size &gt; 0 &amp;&amp; interval[0] &lt;= res[size-1][1] {</span><br><span class="line">            res[size-1][1] = max(res[size-1][1], interval[1])</span><br><span class="line">        } else {</span><br><span class="line">            res = append(res, interval)</span><br><span class="line">        }</span><br><span class="line">    }</span><br><span class="line">    return res</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line">func max(x int, y int) int {</span><br><span class="line">    if x &lt; y {</span><br><span class="line">        return y</span><br><span class="line">    }</span><br><span class="line">    return x</span><br><span class="line">}</span><br></pre></td></tr></table></figure><p>The Go submission beats more than 94% of users:</p><p><img src="https://s2.loli.net/2023/11/07/pAWK23rjuYncdFM.webp"></p><h2 id="3-Advanced-Topics"><a href="#3-Advanced-Topics" class="headerlink" title="3. Advanced Topics"></a>3. Advanced Topics</h2><h2 id="1-Finding-the-Kth-largest-element-of-an-array"><a href="#1-Finding-the-Kth-largest-element-of-an-array" class="headerlink" title="1) Finding the Kth largest element of an array"></a>1) Finding the Kth largest element of an array</h2><p>LeetCode problem 215:</p><p><img src="https://s2.loli.net/2023/11/07/C1Ls9EgmNOtFqTk.webp"></p><p>The Kth largest element of an array is a popular big-tech interview question; I was asked it in interviews at ByteDance and Baidu, so make sure you are proficient with it.</p><p>The problem statement is simple: find the kth largest element of the array when sorted. For example, for [1,2,3,4,5] and k = 2, the second largest element is 4. Duplicate elements need no special handling: for [5,5,4,3,1] and k = 2, the second largest element is 5, not 4.</p><p><strong>The advanced requirement</strong>: design and implement an algorithm with <code>O(n)</code> time complexity.</p><p>This is a classic problem among sorting algorithms. If we don’t care about time complexity, we can simply use any sorting algorithm to sort the array first and then take the kth largest element. 
However, whether in interviews or in daily use, we should pursue the optimal time complexity, and the efficient sorting algorithms are heap sort (nlogn), merge sort (nlogn), and quicksort (nlogn -&gt; n^2 in the worst case).</p><p>How can we satisfy the O(n) time complexity?</p><p><strong>Quicksort</strong></p><p>The answer appears in section 9.2 of Introduction to Algorithms, Third Edition: the <code>randomized_select</code> algorithm is a selection algorithm with expected linear time, modeled after quicksort. The idea is still recursive partitioning, but unlike quicksort, <code>randomized_select</code> processes only one side of each partition. In code:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br></pre></td><td class="code"><pre><span class="line">func findKthLargest(nums []int, k int) int {</span><br><span class="line">    rand.Seed(time.Now().UnixNano())</span><br><span class="line">    return quickSelect(nums, 0, len(nums)-1, len(nums)-k)</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line">func quickSelect(a []int, l, r, index int) int {</span><br><span class="line">    q := partition(a, l, r)</span><br><span class="line">    if q == index {</span><br><span class="line">        return a[q]</span><br><span class="line">    } else if q &lt; index {</span><br><span class="line">        // Unlike quicksort, recurse into only one side of the partition</span><br><span class="line">        return quickSelect(a, q+1, r, index)</span><br><span class="line">    }</span><br><span class="line">    return quickSelect(a, l, q-1, index)</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line">func partition(a []int, l, r int) int {</span><br><span class="line">    // Pick a random element as the pivot and move it to the end</span><br><span class="line">    p := rand.Intn(r-l+1) + l</span><br><span class="line">    a[p], a[r] = a[r], a[p]</span><br><span class="line"></span><br><span class="line">    x := a[r]</span><br><span class="line">    i := l - 1</span><br><span class="line">    for j := l; j &lt; r; j++ {</span><br><span class="line">        if a[j] &lt;= x {</span><br><span class="line">            i++</span><br><span class="line">            a[i], a[j] = a[j], a[i]</span><br><span class="line">        }</span><br><span class="line">    }</span><br><span class="line">    a[i+1], a[r] = a[r], a[i+1]</span><br><span class="line">    return i + 1</span><br><span class="line">}</span><br></pre></td></tr></table></figure><p>Submission results:</p><p><img src="https://s2.loli.net/2023/11/07/3bJOeIBUTMpFdHL.webp"></p><p>The <code>randomized_select</code> optimization of quicksort beats more than 99% of users.</p><p><strong>Heap Sort</strong></p><p>Next we implement it again with heap sort, a sorting algorithm designed to take advantage of the <strong>heap</strong> data structure. 
A heap is a structure that approximates a complete binary tree and satisfies the heap property: a child node’s key is always less than (or greater than) its parent node’s key:</p><ul><li>Min-heap: each node’s value is less than or equal to its children’s values;</li><li>Max-heap: each node’s value is greater than or equal to its children’s values.</li></ul><p>This problem asks for the kth largest number, so we build a max-heap. Each round we take the current maximum and remove it (implementation: swap the heap top with the last heap element, then shrink the heap size by one); after k-1 removals, the kth largest number sits at the top. The code is as follows:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br></pre></td><td class="code"><pre><span class="line">func findKthLargest(nums []int, k int) int {</span><br><span class="line">    heapSize := len(nums)</span><br><span class="line">    buildMaxHeap(nums, heapSize)</span><br><span class="line">    // Pop the maximum k-1 times; the kth largest then sits at the root</span><br><span class="line">    for i := len(nums) - 1; i &gt;= len(nums)-k+1; i-- {</span><br><span class="line">        nums[0], nums[i] = nums[i], nums[0]</span><br><span class="line">        heapSize--</span><br><span class="line">        maxHeapify(nums, 0, heapSize)</span><br><span class="line">    }</span><br><span class="line">    return nums[0]</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line">func buildMaxHeap(nums []int, heapSize int) {</span><br><span class="line">    for i := heapSize / 2; i &gt;= 0; i-- {</span><br><span class="line">        maxHeapify(nums, i, heapSize)</span><br><span class="line">    }</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line">func maxHeapify(nums []int, i, heapSize int) {</span><br><span class="line">    l, r, largest := i*2+1, i*2+2, i</span><br><span class="line">    if l &lt; heapSize &amp;&amp; nums[l] &gt; nums[largest] {</span><br><span class="line">        largest = l</span><br><span class="line">    }</span><br><span class="line">    if r &lt; heapSize &amp;&amp; nums[r] &gt; nums[largest] {</span><br><span class="line">        largest = r</span><br><span class="line">    }</span><br><span class="line">    if largest != i {</span><br><span class="line">        nums[i], nums[largest] = nums[largest], nums[i]</span><br><span class="line">        maxHeapify(nums, largest, heapSize)</span><br><span class="line">    }</span><br><span class="line">}</span><br></pre></td></tr></table></figure><p>As you can see, heap sort has a time complexity of O(nlogn), which is somewhat slower than the quicksort-based <code>randomized_select</code>:</p><p><img src="https://s2.loli.net/2023/11/07/yBus9PRrZmHxjQA.webp"></p><h3 id="2-Find-the-median-of-two-positively-ordered-arrays"><a href="#2-Find-the-median-of-two-positively-ordered-arrays" class="headerlink" title="2) Find the median of two positively ordered arrays"></a>2) Find the median of two sorted arrays</h3><p>LeetCode problem 4:</p><p><img src="https://s2.loli.net/2023/11/07/PMi7WGlSxc5aIem.webp"></p><p>The task is to find the median of two sorted arrays nums1 and nums2, with a required time complexity of O(log(m+n)); it is guaranteed that the two arrays are not both empty.</p><p>The first and easiest approach to think of is to merge the two arrays and take the median. However, merging has O(m+n) complexity, which does not meet the requirement. Seeing the log in the required complexity, we think of <strong>binary search</strong>. 
The idea is as follows:</p><ol><li>When the combined length of the two arrays is odd, the middle number is the median; when it is even, the mean of the two middle numbers is the median;</li><li>Since the array lengths are known, finding the median of the merged arrays only requires binary searching for the cut position that splits the elements into two halves;</li><li>Because one cut position determines the other, we only need to <strong>binary search over one array</strong>; to minimize the complexity, we search over the shorter one;</li><li>The key question is when the binary search ends. Suppose we have array A (length m) and array B (length n), and we cut A at position i and B at position j (since i + j = (m + n)/2, we have j = (m + n)/2 - i). The cut must satisfy: <strong>every element to the left of A[i] and B[j] is no larger than every element to their right, which guarantees that the median comes from the elements around i and j</strong>.</li></ol><p>The graphical representation is as follows:</p><p><img src="https://s2.loli.net/2023/11/07/ZH5C7TeK3zfhYxt.webp"></p><p>First binary cut:</p><p><img src="https://s2.loli.net/2023/11/07/lqZH839X1b7JfjK.webp"></p><p>Second binary cut:</p><p><img src="https://s2.loli.net/2023/11/07/psIZbPR1iuvBkjX.webp"></p><p>After cutting again, left = 3, right = 3, and the binary search over array A is complete. At this point the cut indices for arrays A and B are i = left = 3 and j = mid - left = 2, respectively.</p><p><img src="https://s2.loli.net/2023/11/07/3Xx7IjZ1WMgAif5.webp"></p><p>The median can only come from the numbers at <code>i, j, i-1, j-1</code>: when the total count is odd, the median is the larger of the left partition (i.e. 
i-1 and j-1):</p><ol><li>when the total count of the two arrays is odd, the median is the larger element of the left partition (i.e., the larger of A[i-1] and B[j-1]);</li><li>when the total count is even, the median is the average of the largest number in the left partition and the smallest number in the right partition.</li></ol><p>In addition, we have to handle boundary cases: when i or j is 0, the index i-1 or j-1 does not exist; when i = m or j = n, the index i or j does not exist. Since the problem states that m and n are not both 0, at least one of i and j must exist.</p><p>The Go code is as follows:</p><figure class="highlight go"><table><tr><td class="code"><pre><code class="go">func findMedianSortedArrays(nums1, nums2 []int) float64 {
    m, n := len(nums1), len(nums2)
    if m &gt; n {
        return findMedianSortedArrays(nums2, nums1)
    }
    left, right, mid := 0, m, (m+n+1)/2
    for left &lt; right {
        i := (left + right + 1) / 2
        j := mid - i
        if nums1[i-1] &gt; nums2[j] {
            right = i - 1
        } else {
            left = i
        }
    }
    i, j := left, mid-left
    var leftMax float64
    if i &gt; 0 &amp;&amp; j &gt; 0 {
        leftMax = float64(max(nums1[i-1], nums2[j-1]))
    } else if i &gt; 0 {
        leftMax = float64(nums1[i-1])
    } else {
        leftMax = float64(nums2[j-1])
    }
    if (m+n)%2 == 1 {
        return leftMax
    }
    var rightMin float64
    if i &lt; m &amp;&amp; j &lt; n {
        rightMin = float64(min(nums1[i], nums2[j]))
    } else if i &lt; m {
        rightMin = float64(nums1[i])
    } else {
        rightMin = float64(nums2[j])
    }
    return (rightMin + leftMax) / 2
}

func max(a, b int) int {
    if a &gt; b {
        return a
    }
    return b
}

func min(a, b int) int {
    if a &lt; b {
        return a
    }
    return b
}
</code></pre></td></tr></table></figure><p>Submission results:</p><p><img src="https://s2.loli.net/2023/11/07/uRUK9QIeg1Dk6br.webp"></p><h2 id="4-Summary"><a href="#4-Summary" class="headerlink" title="4. Summary"></a>4. Summary</h2><p>The questions above are classics in LeetCode's sorting category, and the probability of meeting them in interviews or written tests is very high. They also appear frequently in Huawei's machine tests; apart from the last, somewhat harder question, the rest basically appear at least once in every ten interviews.</p>]]></content>
    
    
    <summary type="html">How should you practice on LeetCode? Start with sorting problems, from sorting a linked list to the kth largest element of an array. This article walks you through the high-frequency sorting interview questions at big Internet companies.</summary>
    
    
    
    <category term="Algorithm" scheme="https://www.nablepart.com/categories/Algorithm/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="recognize" scheme="https://www.nablepart.com/tags/recognize/"/>
    
    <category term="sorting algorithm" scheme="https://www.nablepart.com/tags/sorting-algorithm/"/>
    
    <category term="power button" scheme="https://www.nablepart.com/tags/power-button/"/>
    
    <category term="questions" scheme="https://www.nablepart.com/tags/questions/"/>
    
    <category term="array" scheme="https://www.nablepart.com/tags/array/"/>
    
    <category term="companies" scheme="https://www.nablepart.com/tags/companies/"/>
    
    <category term="frequency" scheme="https://www.nablepart.com/tags/frequency/"/>
    
  </entry>
  
  <entry>
    <title>Still don&#39;t understand Redis persistence? Let&#39;s settle it today.</title>
    <link href="https://www.nablepart.com/c908d1379458/"/>
    <id>https://www.nablepart.com/c908d1379458/</id>
    <published>2023-11-06T18:04:00.000Z</published>
    <updated>2025-08-25T09:00:39.794Z</updated>
    
    <content type="html"><![CDATA[<p>Table of Contents</p><ol><li>Introduction</li><li>RDB</li><li>AOF</li><li>Comparison of Two Persistence Mechanisms</li><li>Summary</li></ol><h2 id="1-Introduction"><a href="#1-Introduction" class="headerlink" title="1. Introduction"></a>1. Introduction</h2><p>Q: What is Redis?</p><p>A: Redis, or Remote Dictionary Server, is an in-memory cache database written in C and widely used in Internet products.</p><p>At home and abroad, everyone from Fortune 500 companies to small startups uses Redis, and many cloud providers build their caching, message-queue, and in-memory storage services on top of it. When you use these services, you are in fact using Redis.</p><p><strong>As a developer, you are bound to be asked about it in interviews, even if you don't use it on the job!</strong></p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/86d2084082dd4cf2a25a2a2acc2cd0ca%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p><em>Q: Why is Redis so important, and what are its common application scenarios?</em></p><p>A: Redis operates purely in memory, executes with high efficiency, and natively supports rich data structures. Its application scenarios include, but are not limited to, <strong>caching, event publish/subscribe, and distributed locking</strong>.</p><p><em>Q: Are all Redis operations in-memory?</em></p><p>A: No. The "single-threadedness" of Redis means it is single-threaded when <strong>handling client IO requests for reads and writes</strong>. But Redis itself has multi-threaded scenarios, such as asynchronous deletion, persistence, and cluster synchronization.</p><p>Redis is a common interview topic, and everyone is familiar with the stock answers. 
However, in today's increasingly competitive Internet market, can we answer these topics with the depth and breadth interviewers expect?</p><p>For example, the knowledge point we review today: the Redis persistence mechanism.</p><p><em>Q: What are the common Redis persistence mechanisms?</em></p><p>A: RDB and AOF.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/658c3d8e5c174f7dac1ac0a7fbb1e852%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><h2 id="2-RDB"><a href="#2-RDB" class="headerlink" title="2. RDB"></a>2. RDB</h2><h4 id="2-1-Introduction"><a href="#2-1-Introduction" class="headerlink" title="2.1 Introduction"></a>2.1 Introduction</h4><p>RDB (Redis Database Backup file), also called snapshot mode, is the default data persistence method in Redis.</p><p>RDB is in essence a <strong>timer event</strong> inside Redis: every so often it checks the <strong>number</strong> and <strong>timing</strong> of changes to the current data against the trigger conditions specified in the configuration file.</p><p>When the conditions are met, Redis creates a child process through the operating system call fork(), which <strong>shares the same address space, file system, and semaphores</strong> as the parent process. 
After that, the child process traverses the entire memory space, copies the data set to a temporary file, and, when the copy is complete, notifies the parent process to replace the original file with the new RDB file, completing the persistence operation.</p><p>Meanwhile, the parent process can still serve requests during persistence; parent and child keep their data segments separate through the operating system's multi-process <strong>copy-on-write (COW) mechanism</strong>, which ensures they do not affect each other.</p><h4 id="2-2-Summary-of-advantages-and-disadvantages"><a href="#2-2-Summary-of-advantages-and-disadvantages" class="headerlink" title="2.2 Summary of advantages and disadvantages"></a>2.2 Summary of advantages and disadvantages</h4><p>During RDB persistence, the forked Redis child process saves all Redis data to a newly created dump.rdb file, which is a resource- and time-consuming operation. Therefore, a Redis server should not create rdb files too often, or its performance will suffer badly.</p><p>Beyond that, the biggest shortcoming of RDB persistence is that <strong>data written after the last snapshot can be lost</strong>. Imagine a scenario: during RDB persistence the Redis server suddenly goes down. The child process may already have generated the rdb file, but the parent process has not yet replaced the old rdb file with it, and buffered writes have not been saved, leading to substantial data loss.</p><p>The advantage of RDB persistence is that restores are very fast, so it is well suited to large-scale data recovery. 
If you are not particularly sensitive to data integrity (losing writes made since the last snapshot is acceptable), RDB persistence is a very good fit.</p><h2 id="3-AOF"><a href="#3-AOF" class="headerlink" title="3. AOF"></a>3. AOF</h2><h4 id="3-1-Introduction"><a href="#3-1-Introduction" class="headerlink" title="3.1 Introduction"></a>3.1 Introduction</h4><p>AOF, the append-only log file, is also known as append mode, or log mode.</p><p>AOF records every write command the server executes, <strong>and only the commands that modify memory</strong>; Redis writes these commands to the appendonly.aof file at regular intervals.</p><p>When the server starts up, we can re-execute the AOF file to restore the dataset, a process known as <strong>command replay</strong>.</p><h4 id="3-2-Three-Persistence-Mechanisms"><a href="#3-2-Three-Persistence-Mechanisms" class="headerlink" title="3.2 Three Persistence Mechanisms"></a>3.2 Three Persistence Mechanisms</h4><p>When Redis receives a modification command from a client, it first performs the corresponding checks; if the command is valid, it immediately stores it in a buffer, and then appends the buffered data to the .aof file at a certain rate.</p><p>This way, even after an unexpected crash, replaying the commands stored in the aof file restores the state from before the crash.</p><p>In the execution flow above there is one crucial step, the <strong>command write, which is a disk IO operation</strong>: to improve write efficiency, Redis does not write content directly to disk. It first puts the content into an in-memory buffer, and only when the buffer is full or the AOF persistence policy is met does it actually flush the buffer contents to 
disk (fsync operation).</p><p>There are three persistence strategies (i.e., frequencies of the fsync operation) for AOF:</p><ul><li>always: every time the server executes a write command, it calls fsync once to write the buffered command to disk. In this mode, a server failure loses no successfully executed command data, but execution is very slow;</li><li>everysec (default): the server calls fsync once per second to write the buffered commands to disk. In this mode, a server failure loses at most one second of executed commands; this is the usual AOF configuration;</li><li>no: the server never calls fsync itself; the operating system decides when to write the buffered commands to disk. In this mode, the number of commands lost after an unexpected crash is unpredictable, so this strategy carries too much uncertainty and is rarely used.</li></ul><p>Redis is still at risk of losing data if cached data has not been written to disk before a crash. How many commands are lost depends on when they are written to disk: <strong>the earlier commands reach disk, the less data is lost in an accident.</strong></p><p>Since fsync is a disk IO operation, it is slow! 
If Redis fsynced on every command (always), performance would suffer severely.</p><p>On production servers, Redis usually keeps the default of fsyncing about once per second (everysec), maintaining high performance while minimizing data loss.</p><p>The last strategy (no), which lets the operating system decide when to sync data to disk, carries too much uncertainty and is not recommended.</p><blockquote><p>Note: sync and fsync are two functions provided by the operating system to prevent inconsistencies between caches and files caused by “delayed writes”.<br>sync queues the modified data for writing and returns without waiting for the IO operation to finish.<br>fsync, on the other hand, waits for the IO operation to finish before returning, guaranteeing that modified blocks are written to disk immediately and that the file data is consistent with the cache.<br>In other words, the Linux fsync() function synchronously flushes the contents of a given file from the kernel cache to disk, while sync() operates asynchronously.</p></blockquote><h4 id="3-3-Rewrite-Mechanism"><a href="#3-3-Rewrite-Mechanism" class="headerlink" title="3.3 Rewrite Mechanism"></a>3.3 Rewrite Mechanism</h4><p>Under the AOF persistence policy, as Redis runs for a long time, the aof file grows longer and longer. 
If the machine crashes and restarts, replaying the entire aof file is very time-consuming, leaving Redis unable to serve requests for a long time.</p><p>Therefore, to keep the aof file at a reasonable size, Redis provides an AOF rewrite mechanism that “slims down” the file: the server creates a new AOF file to replace the existing one. <strong>The old and new files describe the same database state; the difference is that the new file contains no redundant commands</strong>, so it is much smaller than the old one.</p><p>Redis offers two ways to rewrite an AOF file: manually executing the BGREWRITEAOF command, or configuring a policy for automatic rewriting. AOF rewriting resembles RDB persistence in that it forks a child process to work on the original AOF file.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/e7f77e6cf4c840b88631aaf6106b5687%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>As shown in the figure: the parent process continues to handle new requests during the AOF rewrite. 
If new commands arrive, they are appended to the AOF rewrite buffer and then also appended to the new AOF file.</p><h2 id="Comparison-of-AOF-and-RDB"><a href="#Comparison-of-AOF-and-RDB" class="headerlink" title="Comparison of AOF and RDB"></a>Comparison of AOF and RDB</h2><table><thead><tr><th>RDB persistence mechanism</th><th>AOF persistence mechanism</th></tr></thead><tbody><tr><td>Full backup: saves the entire database each time</td><td>Incremental backup: saves one modifying command at a time</td></tr><tr><td>Long interval between persistence operations</td><td>Default save interval is one second (everysec)</td></tr><tr><td>Data saved in a binary format; restores are fast</td><td>Data saved in a text format; restores are comparatively slow</td></tr><tr><td>The SAVE command blocks the server, though manually or automatically triggered BGSAVE does not</td><td>AOF persistence never blocks the server</td></tr></tbody></table><p>Before Redis 4.0, we could only choose RDB or AOF as the persistence mechanism; since Redis 4.0, we can configure a hybrid, i.e., RDB+AOF used together. For details, see the redis.conf file:</p><figure class="highlight bash"><table><tr><td class="code"><pre><code class="bash">rdbcompression yes

rdbchecksum yes

dbfilename dump.rdb

appendfilename &quot;appendonly.aof&quot;

appendfsync everysec

auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

aof-rewrite-incremental-fsync yes

rdb-save-incremental-fsync yes
</code></pre></td></tr></table></figure><blockquote><p>If both an RDB file and an AOF file exist at recovery time, we should recover from the AOF file first, as this maximizes data safety.</p></blockquote><h2 id="5-Summary"><a href="#5-Summary" class="headerlink" title="5. Summary"></a>5. Summary</h2><p>2023 has come and gone, and the days when the Internet expanded as it did in earlier years, handing out windfalls to applications, are gradually passing. If the tide of the computer industry recedes, for a short while or a long one, what decides whether programmers are left stranded on the beach? Marginal business lines, or an aging crisis?</p><p>I do not think so. The rising and ebbing tide is simply a trend: some sense the crisis early and head for shore in time with their swimsuits on; others, unsatisfied with the status quo, swim deeper into the sea. These are the wise ones in the Internet wave, and the wise do not worry about the tide receding, because they have prepared in advance. 
And opportunity always favors those who prepare early!</p><p><img src="https://p3-juejin.byteimg.com/tos-cn-i-k3u1fbpfcp/9a2def93e9814100923de8f63b012c01~tplv-k3u1fbpfcp-zoom-in-crop-mark:1512:0:0:0.awebp"></p><p>Young people, let’s join hands and ride the wave of the Internet together~</p>]]></content>
    
    
    <summary type="html">At home and abroad, everyone from Fortune 500 companies to small startups uses Redis, and many cloud providers build their caching, message-queue, and in-memory storage services on top of it; when you use those services, you are in fact using Redis</summary>
    
    
    
    <category term="Backend" scheme="https://www.nablepart.com/categories/Backend/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="framework" scheme="https://www.nablepart.com/tags/framework/"/>
    
    <category term="Backend Technology Sharing" scheme="https://www.nablepart.com/tags/Backend-Technology-Sharing/"/>
    
    <category term="Interview" scheme="https://www.nablepart.com/tags/Interview/"/>
    
    <category term="Redis" scheme="https://www.nablepart.com/tags/Redis/"/>
    
    <category term="memory" scheme="https://www.nablepart.com/tags/memory/"/>
    
    <category term="message" scheme="https://www.nablepart.com/tags/message/"/>
    
    <category term="corresponding" scheme="https://www.nablepart.com/tags/corresponding/"/>
    
  </entry>
  
  <entry>
    <title>Messaging middleware, a powerful tool for coping with traffic spikes</title>
    <link href="https://www.nablepart.com/e0c5bb16f50a/"/>
    <id>https://www.nablepart.com/e0c5bb16f50a/</id>
    <published>2023-11-06T17:04:00.000Z</published>
    <updated>2025-08-25T09:00:39.794Z</updated>
    
    <content type="html"><![CDATA[<h3 id="1-Introduction"><a href="#1-Introduction" class="headerlink" title="1. Introduction"></a>1. Introduction</h3><p>On the weekend, I drove to the beach with my friends. Anyone who has been to Yangmeikeng knows that you need to take a speedboat from Yangmeikeng to the Lukzui Mountain Resort.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/86a42f45d8c34a98bff5843acea4c24b%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>True to its place among Shenzhen's top five attractions, there was still a long queue for boats at 4 or 5 pm. After buying tickets, we were called to a flight of steps on the shore to wait to board, and the scene was slightly chaotic.</p><p><strong>The crowds were heavy, but not many boats were arriving to pick up passengers.</strong></p><p>Just as I began to sweat for the staff keeping order, I watched them go back and forth, marshalling several groups of people onto boats in an orderly fashion.</p><p>Before long, a thin, tanned, middle-aged man came to call us: a boat holds 10 people, so he counted off the 10 people at the front, who could then board while everyone else stayed put.</p><p>Sure enough, software design comes from life, and this is the classic data consumption problem in system design!</p><h3 id="2-Message-middleware"><a href="#2-Message-middleware" class="headerlink" title="2. Message middleware"></a>2. 
Message middleware</h3><p>When the amount of data (passengers) is too large for the system (the speedboats carrying passengers) to consume immediately, the data is placed in a <strong>consumption queue</strong> (the shore steps) to wait, which serves to shave traffic peaks.</p><p><strong>In distributed systems, the main way to implement a consumption queue is message middleware.</strong></p><h4 id="What-is-Message-Middleware"><a href="#What-is-Message-Middleware" class="headerlink" title="What is Message Middleware"></a>What is Message Middleware</h4><p>A message broker is an infrastructure component for delivering messages, notifications, and events in distributed systems.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/e2439e84a7344d8d845b41f848ec1263%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>It lets data and information be exchanged asynchronously between different components, applications, or systems, enabling peak-shaved, decoupled, and scalable communication.</p><p>The fundamentals of message middleware involve the following key concepts:</p><ol><li><strong>Message Producer:</strong> the sender of the message, usually an application or component, which sends messages to the message middleware.</li><li><strong>Message Consumer:</strong> the receiver of the message, usually an application or component, which receives and processes messages from the message middleware.</li><li><strong>Message Queue:</strong> the core component of message middleware: a queue that stores messages. Producers put messages into the queue and consumers take them out, usually in first-in-first-out (FIFO) order.</li><li><strong>Message Topic (Topic):</strong> besides queues, message middleware also supports topics, which enable the publish-subscribe model of message communication. 
Message publishers publish messages to topics, while subscribers can subscribe to specific topics to receive related messages.</li></ol><p>Advantages of message middleware include:</p><ul><li><strong>Decoupling:</strong> Message middleware allows producers and consumers to operate independently; they do not need to be directly aware of each other’s existence. This decoupling makes the system more flexible and maintainable.</li><li><strong>Scalability:</strong> By increasing the capacity of the message middleware, it can easily handle more message traffic and consumers.</li><li><strong>Asynchronous Communication:</strong> Message middleware allows asynchronous communication: producers can continue working without waiting for a message to be processed, improving the performance and responsiveness of the system.</li><li><strong>Message Persistence:</strong> Messages are usually persisted so that they are not lost even if the message middleware or a consumer fails.</li></ul><p>There are many different implementations and protocols for message middleware; some of the popular ones include ActiveMQ, RocketMQ, RabbitMQ, and Kafka.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/d96af938b105410a8ed242aedda89319%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>They have different characteristics and advantages for different usage scenarios and requirements.</p><p>Message middleware is widely used in a variety of applications, including <strong>microservice architecture, big data processing, real-time data analysis, log collection, event-driven architecture</strong>, and so on.</p><p>Next, we introduce the common messaging middleware options and their advantages, disadvantages, and applicable scenarios, to help you make an informed choice in application development.</p><h3 id="3-ActiveMQ"><a href="#3-ActiveMQ" class="headerlink" title="3. ActiveMQ"></a>3. 
ActiveMQ</h3><p><strong>Features:</strong></p><ul><li>ActiveMQ is a Java-based open source messaging middleware that implements the JMS (Java Message Service) specification.</li><li>Supports multiple messaging models, including peer-to-peer and publish-subscribe.</li><li>Provides high availability and load balancing, supports master-slave replication, and can be used to build high-availability systems.</li><li>Best suited for Java applications, though there is some support for clients in other programming languages.</li></ul><p><strong>Benefits:</strong></p><ul><li>Easy to use for rapid development and prototyping.</li><li>Integrates with the Spring framework for easy integration with Spring applications.</li><li>Suitable for small to medium-sized systems and intra-enterprise communications.</li></ul><p><strong>Disadvantages:</strong></p><ul><li>Relatively low performance, not suitable for scenarios demanding high throughput and low latency.</li><li>Does not support large-scale message streams, so it is not suitable for big data and real-time analysis applications.</li></ul><p><strong>Applicable Scenarios:</strong> ActiveMQ is suitable for internal communications that require simple messaging and for small to medium-sized systems. It performs well in intra-enterprise communication and lightweight applications, but is not suitable for high-performance, high-throughput, large-scale data processing.</p><p>In practice, ActiveMQ sees few deployments at Chinese Internet companies; it is mostly used by traditional enterprises.</p><h3 id="4-RocketMQ"><a href="#4-RocketMQ" class="headerlink" title="4. RocketMQ"></a>4. 
RocketMQ</h3><p><strong>Features:</strong></p><ul><li>RocketMQ is an MQ framework open-sourced early on by Alibaba, written in Java and later donated to Apache; it is a fast, reliable, scalable distributed messaging middleware.</li><li>Supports publish-subscribe and peer-to-peer messaging models.</li><li>Offers high performance and low latency, suitable for large-scale messaging.</li><li>Supports a rich set of client languages, including Java, C++, Python, Go, and so on.</li></ul><p><strong>Advantages:</strong></p><ul><li>High performance and low latency, suitable for high-throughput, large-scale applications.</li><li>Supports multiple messaging models, fitting different business scenarios.</li><li>Comes with powerful monitoring and management tools.</li></ul><p><strong>Disadvantages:</strong></p><ul><li>Deployment and configuration are relatively complex and require some specialized knowledge.</li><li>The community is relatively small; compared with some other messaging middleware, the documentation and ecosystem are relatively immature.</li></ul><p><strong>Applicable Scenarios:</strong> RocketMQ is suitable for large-scale applications that require high performance, low latency, and scalability, such as e-commerce platforms, financial systems, and Internet of Things applications.</p><h3 id="5-RabbitMQ"><a href="#5-RabbitMQ" class="headerlink" title="5. RabbitMQ"></a>5. 
RabbitMQ</h3><p><strong>Features:</strong></p><ul><li>RabbitMQ is an open source messaging middleware that implements the AMQP (Advanced Message Queuing Protocol) specification.</li><li>Supports a wide range of messaging models, including peer-to-peer, publish-subscribe, and RPC.</li><li>Provides reliable messaging, with support for transactions and message acknowledgment.</li><li>Has multiple client libraries and supports multiple programming languages.</li></ul><p><strong>Benefits:</strong></p><ul><li>Mature technology with high stability, widely used in enterprise applications.</li><li>Provides high availability and load balancing mechanisms.</li><li>Supports multiple programming languages, suitable for cross-language applications.</li></ul><p><strong>Disadvantages:</strong></p><ul><li>Relatively low performance, not suitable for large-scale applications with high throughput.</li><li>Deployment and configuration are complex and come with a learning cost.</li><li>RabbitMQ itself is written in Erlang, so its source code is difficult to analyze without solid Erlang skills.</li></ul><p><strong>Applicable Scenarios:</strong> RabbitMQ is suitable for enterprise-level applications that require reliability and transaction support but do not have particularly high performance requirements.</p><h3 id="6-Kafka"><a href="#6-Kafka" class="headerlink" title="6. Kafka"></a>6. Kafka</h3><p><strong>Features:</strong></p><ul><li>Kafka is a high-throughput, low-latency distributed messaging middleware for large-scale data processing and real-time stream processing.</li><li>Mainly used with the publish-subscribe model, storing messages as logs.</li><li>Highly scalable and available, suitable for building large-scale real-time data streaming applications.</li><li>Supports a variety of clients, including Java, Python, Go, and so on. 
</li></ul><p><strong>Advantages:</strong></p><ul><li>High throughput and low latency for large-scale data processing and real-time stream processing.</li><li>High scalability, with support for building large-scale data pipelines.</li><li>Data persistence and data replication to ensure data reliability.</li></ul><p><strong>Disadvantages:</strong></p><ul><li>Complex to deploy and configure; requires specialized knowledge.</li><li>Not suitable for small-scale applications due to its relatively high complexity.</li></ul><p><strong>Applicable Scenarios:</strong> Kafka is suitable for applications that require high throughput, low latency, and large-scale data processing, such as log collection, real-time data analytics, and event-driven architecture.</p><h3 id="7-Technology-Selection"><a href="#7-Technology-Selection" class="headerlink" title="7. Technology Selection"></a>7. Technology Selection</h3><h4 id="RabbitMQ-and-Kafka"><a href="#RabbitMQ-and-Kafka" class="headerlink" title="RabbitMQ and Kafka"></a>RabbitMQ and Kafka</h4><p>RabbitMQ and Kafka are the two most commonly used messaging middleware options; the main differences between them are:</p><ul><li><strong>Performance:</strong> The performance of message middleware is mainly measured by throughput. Kafka’s single-node QPS can reach the million level, while RabbitMQ’s is on the order of tens of thousands;</li><li><strong>Data Reliability:</strong> Both Kafka and RabbitMQ are equipped with multi-replica mechanisms, giving both high data reliability;</li><li><strong>Consumption mode:</strong> Kafka messages are actively pulled by the client, while RabbitMQ supports two modes: active pull and server push. 
With push, RabbitMQ delivers messages with lower latency and keeps the consumer simpler, while Kafka’s pull model lets consumers fetch at their own pace, yielding higher throughput;</li><li><strong>Idempotency:</strong> Kafka supports idempotency for a single producer, single partition, and single session, while RabbitMQ does not;</li><li><strong>Other features:</strong> RabbitMQ supports priority queues, delayed queues, dead-letter queues (<strong>queues that store messages that could not be consumed</strong>), and more.</li></ul><h4 id="How-to-choose-the-right-messaging-middleware"><a href="#How-to-choose-the-right-messaging-middleware" class="headerlink" title="How to choose the right messaging middleware"></a>How to choose the right messaging middleware</h4><p>In application development, choosing the right messaging middleware depends on your specific requirements:</p><ul><li>If your application is a <strong>small to medium-sized system</strong> with low performance requirements, and you care more about ease of use and rapid development, then ActiveMQ may be a good choice.</li><li>If you need to handle <strong>large-scale messaging</strong> and are looking for high performance and low latency, then RocketMQ or Kafka may be a better fit, depending on your application type and requirements.</li><li>If your application is an <strong>enterprise application</strong> that requires reliability and transactional support but does not require high performance, then RabbitMQ may be a good choice.</li><li>The final choice also depends on your technology stack, the experience of your team, and specific business requirements. 
It is recommended that you carefully evaluate your application requirements before choosing a messaging middleware, and decide on a case-by-case basis.</li></ul><p>Of course, whichever messaging middleware you choose, you need an in-depth understanding of its features and usage to ensure it meets your application requirements and helps you build an efficient, reliable distributed system.</p><h3 id="8-Conclusion"><a href="#8-Conclusion" class="headerlink" title="8. Conclusion"></a>8. Conclusion</h3><p>Regardless of which message middleware is used, we can often see the clever use of consumption queues in our daily lives.</p><p><strong>With these buffering mechanisms, the order of our daily travel and consumption can be well protected.</strong></p>]]></content>
    
    
    <summary type="html">When there is too much data (passengers), the system (the speedboat carrying passengers) cannot consume it immediately, so the data is first put into a consumption queue (the shore steps) to wait, playing the role of traffic peak shaving. In distributed systems, a major way to implement consumption queues is to use message middleware.</summary>
    
    
    
    <category term="Backend" scheme="https://www.nablepart.com/categories/Backend/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="Backend Technology Sharing" scheme="https://www.nablepart.com/tags/Backend-Technology-Sharing/"/>
    
    <category term="recognize" scheme="https://www.nablepart.com/tags/recognize/"/>
    
    <category term="data" scheme="https://www.nablepart.com/tags/data/"/>
    
    <category term="immediately" scheme="https://www.nablepart.com/tags/immediately/"/>
    
    <category term="consumptio" scheme="https://www.nablepart.com/tags/consumptio/"/>
    
    <category term="middleware" scheme="https://www.nablepart.com/tags/middleware/"/>
    
    <category term="major" scheme="https://www.nablepart.com/tags/major/"/>
    
  </entry>
  
  <entry>
    <title>Machine Learning from Data Labeling</title>
    <link href="https://www.nablepart.com/3e6bbfbe07a5/"/>
    <id>https://www.nablepart.com/3e6bbfbe07a5/</id>
    <published>2023-11-06T16:04:00.000Z</published>
    <updated>2025-08-25T09:00:39.794Z</updated>
    
    <content type="html"><![CDATA[<p>Table of Contents</p><ol><li>Machine learning</li><li>Labeling systems</li><li>Number of system tasks</li><li>Flow of system tasks</li><li>Summary</li></ol><h2 id="1-Machine-Learning"><a href="#1-Machine-Learning" class="headerlink" title="1. Machine Learning"></a>1. Machine Learning</h2><h3 id="1-1-What-is-machine-learning"><a href="#1-1-What-is-machine-learning" class="headerlink" title="1.1 What is machine learning?"></a>1.1 What is machine learning?</h3><p>As the name suggests, machine learning is the ability of a computer program or system to learn on its own, without direct human help or intervention, so that it can answer questions for humans. In layman’s terms, it is the process of creating an Artificial Intelligence (AI for short) model that learns on its own like a human being, allowing rapid advancement of knowledge in a particular area.</p><p>The difference is that the AI only needs a short period of self-learning to reach or even exceed the human level. 
For example, AlphaGo learned Go for only 9 days before it could beat a Korean 9-dan player with a 99% win rate; it is undoubtedly a prodigy of learning.</p><h3 id="1-2-Artificial-Intelligence-is-not-intelligent"><a href="#1-2-Artificial-Intelligence-is-not-intelligent" class="headerlink" title="1.2 Artificial Intelligence is not intelligent"></a>1.2 Artificial Intelligence is not intelligent</h3><p>Although AI’s learning ability is very strong, and in human eyes it learns like a prodigy, it is not as intelligent as it seems.</p><p>What’s interesting is that most outsiders, including me, were for a long time unaware that AI itself is not intelligent: machine learning, for example, often relies on low-paid crowdsourced workers who manually annotate and fine-tune the data so that models can continue to learn.</p><p>I happen to be working on large-model-related <strong>data labeling</strong> lately, so I’ll share some of the labeling-related points I’ve recently sorted through.</p><h3 id="1-3-What-is-Data-Labeling"><a href="#1-3-What-is-Data-Labeling" class="headerlink" title="1.3 What is Data Labeling"></a>1.3 What is Data Labeling</h3><p>Data annotation is the process of attributing, labeling, or classifying data to help machine learning algorithms understand and categorize the information they need to process. This process is critical in AI model training, allowing AI to accurately understand various data types such as images, audio files, video clips, or text.</p><p>Today’s <strong>decision-making AI</strong> (e.g., the recommendation systems of Douyin or Taobao, Tesla’s smart driving, etc.) and <strong>generative AI</strong> (e.g., OpenAI’s ChatGPT, Baidu’s Wenxin Yiyan, etc.) 
can’t be separated from the annotation step: all of them currently rely on large numbers of annotators, or on humans screening data after machines have annotated it.</p><p>Here I’d like to share a text-annotation project from GitHub.</p><h2 id="2-The-annotation-system"><a href="#2-The-annotation-system" class="headerlink" title="2. The annotation system"></a>2. The annotation system</h2><p>This annotation system is called Open Assistant, a project that aims to make chat-based large language models accessible to everyone.</p><p>The project has been open sourced on GitHub and passed 32k ⭐️ in just a few months; for those interested: [<a href="https://github.com/LAION-AI/Open-Assistant">github.com&#x2F;LAION-AI&#x2F;Op…</a>] </p><p>On this annotation system, several machine learning processes can be simulated, including conversation-tree expansion, session annotation, and final scoring, after which the highest-scoring conversations are filtered out for continued learning.</p><h3 id="2-1-Conversation-tree"><a href="#2-1-Conversation-tree" class="headerlink" title="2.1 Conversation tree"></a>2.1 Conversation tree</h3><p>In the annotation system, the conversation tree is the most basic data structure; it models the conversation process used in machine learning, and its structure is as follows:</p><p><img src="https://s2.loli.net/2023/11/07/yw59rXsdUYfhpZq.webp"></p><p>First, the root node of the tree is the <strong>Initial Instruction</strong>, the first sentence given by a user (Prompt), which may be a prompt or may pose a question.</p><p>Second, the <strong>chatbot gives a reply</strong>: based on different processing dimensions, including its own level of knowledge and summarization ability, the chatbot replies to the user’s question (Assistant Response).</p><p>Third, once the chatbot has replied, <strong>the user continues to ask</strong> (Prompt Response); at this point the dialog tree has a back-and-forth exchange.</p><p>Then the <strong>chatbot 
gives a response</strong> (Assistant Response), continuing to expand the dialog tree.</p><p>When a node in the tree, or the user, no longer responds, the tree ends. However, the chatbots continue to learn from all the generated dialog trees, i.e., from all conversations with all users, in order to become smarter and give better answers in future conversations.</p><h3 id="2-2-Tasks"><a href="#2-2-Tasks" class="headerlink" title="2.2 Tasks"></a>2.2 Tasks</h3><p>In addition to the dialog tree, the Task is also a very important data structure in the annotation system. The dialog tree and its nodes are updated through the various tasks. Tasks in the annotation system are of the following types:</p><ul><li><strong>Initial prompt (initial_prompt):</strong> The system, in the role of the user, asks for an initial instruction to be created. This initial instruction may seek an explanation of a concept, pose an algebra problem, or request that an article be written; in short, <strong>the user asks a question</strong>;</li><li><strong>Label the initial prompt (label_initial_prompt):</strong> Labelers label the initial instruction; once enough people have labeled it, the chatbot starts replying to the initial prompt. In short, <strong>labelers screen out unhealthy questions, such as illegal, criminal, or politically sensitive ones</strong>;</li><li><strong>Chatbot reply (assistant_reply):</strong> A trained chatbot replies to the initial instruction; this can be seen as the first answer, at the second level of the dialog tree. <strong>In this step, different AIs reply to the user’s question according to their knowledge level</strong>;</li><li><strong>Label the chatbot’s reply (label_assistant_reply):</strong> The content of the chatbot’s reply to the user’s instructions&#x2F;replies is labeled; replies labeled as qualified (by enough people) proceed to the next round of conversation or to the ranking task. <strong>Labelers screen and score the AI’s responses, and the AIs with the best responses are ultimately selected to continue training (think of it as raising the bar and eliminating the unqualified)</strong>;</li><li><strong>Reply as a user (prompter_reply):</strong> Reply to the chatbot’s message in the role of the user, adding a node to the dialog tree; that is, <strong>the user continues to ask questions</strong>;</li><li><strong>Label the user’s reply (label_prompter_reply):</strong> Label the content of the user’s reply to the chatbot; the next round of dialog proceeds once the labeling passes. <strong>Labelers screen the user’s questions, filtering out unhealthy and meaningless information</strong>;</li><li><strong>Rank users’ replies (rank_prompter_replies):</strong> When many users have replied and each reply has passed labeling, <strong>annotators rank the text quality of the users’ replies and select the best ones for the bot to learn from</strong>;</li><li><strong>Rank chatbots’ replies (rank_assistant_replies):</strong> Similarly, through <strong>ranked scoring, the optimal replies among the chatbots are selected for other bots’ models to learn and train on</strong>;</li><li><strong>Random tasks:</strong> The system randomly selects one of the above tasks for the annotator to complete.</li></ul><h2 id="3-The-number-of-system-tasks"><a href="#3-The-number-of-system-tasks" class="headerlink" title="3. The number of system tasks"></a>3. The number of system tasks</h2><p>The state flow of the conversation tree and the expansion of its nodes are driven by these different tasks. 
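The state flow just described (initial_prompt_review, then growing, then ranking, or aborted_low_grade when a review fails) can be sketched in a few lines of Python. This is my own minimal illustration, not Open Assistant’s actual code; the class shape and the TARGET_NODES value are assumptions made for the example.

```python
# Minimal sketch (assumption, not Open Assistant's real implementation) of how
# tasks drive a conversation tree through the states used in this article:
# initial_prompt_review -> growing -> ranking, or aborted_low_grade on failure.

SCORE_THRESHOLD = 0.6   # default acceptance score mentioned in the article
TARGET_NODES = 5        # assumed target node count, for illustration only

class ConversationTree:
    def __init__(self, initial_prompt):
        self.state = "initial_prompt_review"
        self.messages = [("prompter", initial_prompt)]  # root node

    def label_initial_prompt(self, spam_ratio, unclear_ratio):
        # score = 1 - spam ratio - language-unclear ratio (see section 4)
        score = 1 - spam_ratio - unclear_ratio
        self.state = "growing" if score > SCORE_THRESHOLD else "aborted_low_grade"
        return score

    def reply(self, role, text):
        # assistant_reply / prompter_reply tasks append a node while growing
        assert self.state == "growing", "tree must be growing to accept replies"
        self.messages.append((role, text))
        if len(self.messages) >= TARGET_NODES:
            self.state = "ranking"  # enough nodes: move to the ranking phase

tree = ConversationTree("Explain message queues")
tree.label_initial_prompt(spam_ratio=0.0, unclear_ratio=0.0)  # passes review
tree.reply("assistant", "A queue buffers messages between producers and consumers.")
tree.reply("prompter", "How does FIFO ordering work?")
print(tree.state)  # stays "growing" until the node count reaches TARGET_NODES
```

The score formula mirrors the calculation described in section 4, with 0.6 as the default threshold mentioned there.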
Since the number of conversation trees in each state differs, the number of each type of task displayed on the page also differs.</p><p><img src="https://s2.loli.net/2023/11/07/OTzrZUJyM1fgop5.webp"></p><h3 id="1-Number-of-random-tasks-random"><a href="#1-Number-of-random-tasks-random" class="headerlink" title="1) Number of random tasks (random)"></a>1) Number of random tasks (random)</h3><p>The number of random tasks (shown on the page as “I feel lucky”; this is a system bug, don’t worry) is the total count across all task types. When you choose to do a random task, the system randomly selects a conversation tree and executes the corresponding task.</p><h3 id="2-Number-of-tasks-to-write-commands-as-a-user-initial-prompt"><a href="#2-Number-of-tasks-to-write-commands-as-a-user-initial-prompt" class="headerlink" title="2) Number of tasks to write commands as a user (initial_prompt)"></a>2) Number of tasks to write commands as a user (initial_prompt)</h3><p>To minimize write conflicts when updating data tables, the system limits the number of active conversation trees; the number of conversation trees that can currently remain active is the number of tasks for writing instructions as a user.</p><p>This number is controlled by a combination of two metrics:</p><ul><li>Metric 1 is the number of conversation trees remaining to be created, plus the number of already-created trees that can continue to grow;</li><li>Metric 2 is the number of conversation trees currently available for queuing;</li></ul><p>The number of tasks for writing instructions as a user takes the smaller of the two metrics.</p><p><strong>In simple terms, this number is the number of questions the user can still ask.</strong></p><h3 id="3-Number-of-tasks-labeled-user-initial-instructions-label-initial-prompt"><a href="#3-Number-of-tasks-labeled-user-initial-instructions-label-initial-prompt" class="headerlink" title="3) Number of tasks labeled user initial instructions (label_initial_prompt)"></a>3) Number of tasks labeled user initial instructions (label_initial_prompt)</h3><p>After the user has written the initial instructions, the dialog tree needs those instructions labeled. Therefore, the number of tasks for labeling the user’s initial instructions is the number of conversation trees that are currently active, are in the initial-review state (initial_prompt_review), and have a label count below the configured value.</p><p>When multiple users have finished labeling a dialog tree and the labeling results are satisfactory, the state of the dialog tree changes to growing, and the number of tasks for labeling the user’s initial instructions decreases by one.</p><p><strong>In a nutshell, this number is the total number of initial questions available for annotators to label.</strong></p><h3 id="4-Number-of-tasks-to-reply-as-a-chatbot-assistant-reply"><a href="#4-Number-of-tasks-to-reply-as-a-chatbot-assistant-reply" class="headerlink" title="4) Number of tasks to reply as a chatbot (assistant_reply)"></a>4) Number of tasks to reply as a chatbot (assistant_reply)</h3><p>When <strong>3) has finished annotating the user’s initial instructions</strong> and the session tree needs a reply, the reply is first made as an assistant. Therefore, the number of tasks for replying as a chatbot is the number of conversation trees that are currently active, growing, and whose latest node was created in the role of prompter.</p><p><strong>This is the total number of labeled questions that the AI can reply to (in the system, a human user can also simulate the AI).</strong></p><h3 id="5-Number-of-tasks-labeled-for-chatbot-replies-label-assistant-reply"><a href="#5-Number-of-tasks-labeled-for-chatbot-replies-label-assistant-reply" class="headerlink" title="5) Number of tasks labeled for chatbot replies (label_assistant_reply)"></a>5) Number of tasks labeled for chatbot replies (label_assistant_reply)</h3><p>Once <strong>4) has replied to the conversation tree</strong> as a chatbot, those replies need to be labeled. Therefore, the number of tasks for labeling chatbot replies is the number of conversation trees that are currently active, in the growing state, with a label count below the configured value, and whose leaf nodes were created in the role of chatbot (assistant).</p><p><strong>In a nutshell, this quantity is the total number of AI replies available for annotators to label.</strong></p><h3 id="6-Number-of-tasks-replying-as-a-user-prompter-reply"><a href="#6-Number-of-tasks-replying-as-a-user-prompter-reply" class="headerlink" title="6) Number of tasks replying as a user (prompter_reply)"></a>6) Number of tasks replying as a user (prompter_reply)</h3><p>After <strong>5) has finished labeling the chatbot’s replies</strong>, if the labeling result is qualified, the session tree needs further replies, this time in the role of the user.</p><p>Therefore, the number of tasks for replying as a user is the number of active, growing conversation trees whose leaf nodes were created in the role of chatbot (assistant).</p><h3 id="7-Number-of-tasks-to-label-the-user’s-replies-label-prompter-reply"><a href="#7-Number-of-tasks-to-label-the-user’s-replies-label-prompter-reply" class="headerlink" title="7) Number of tasks to label the user’s replies (label_prompter_reply)"></a>7) Number of tasks to label the user’s replies (label_prompter_reply)</h3><p>After <strong>6) has replied to the conversation tree</strong> as a user, these replies need 
to continue to be labeled.</p><p>Therefore, the number of tasks for labeling user replies is the number of conversation trees that are currently active, in the growing state, with a label count below the configured value, and whose leaf nodes were created in the role of the user (prompter).</p><h3 id="8-Number-of-tasks-to-rank-chatbot-user-replies-rank-assistant-replies-rank-prompter-replies"><a href="#8-Number-of-tasks-to-rank-chatbot-user-replies-rank-assistant-replies-rank-prompter-replies" class="headerlink" title="8) Number of tasks to rank chatbot&#x2F;user replies (rank_assistant_replies&#x2F;rank_prompter_replies)"></a>8) Number of tasks to rank chatbot&#x2F;user replies (rank_assistant_replies&#x2F;rank_prompter_replies)</h3><p>When labeling is complete, if the number of nodes (root and leaf nodes combined) in the conversation tree reaches the target value, the status of the conversation tree is set to ranking.</p><p>Therefore, the number of ranking tasks is the number of conversation trees that are currently active and in the ranking state. There are two types of ranking tasks:</p><ol><li>if the leaf node of the dialog tree is a user reply, the users’ replies are ranked;</li><li>if the leaf node of the dialog tree is a chatbot reply, the chatbots’ replies are ranked.</li></ol><h2 id="4-Labeling-system-task-flow"><a href="#4-Labeling-system-task-flow" class="headerlink" title="4. Labeling system task flow"></a>4. 
Labeling system task flow</h2><h3 id="1-Writing-commands-as-a-user"><a href="#1-Writing-commands-as-a-user" class="headerlink" title="1) Writing commands as a user"></a>1) Writing commands as a user</h3><p>Before writing a command as a user, some basic checks are performed, such as user authentication, whether the user’s number of queued tasks in the recent period exceeds the limit, and whether the remaining creation quota for dialog trees is sufficient.</p><p>If any basic check fails, the process is terminated; otherwise, writing of the initial instruction begins.</p><p>When the user completes the task of writing the initial instruction, the details of the initial instruction (e.g., text, user name, role: prompter, time, etc.) are stored in the database as a new dialog message, and a new dialog tree is generated, now in the initial-review state (initial_prompt_review).</p><h3 id="2-Labeling-initial-user-commands"><a href="#2-Labeling-initial-user-commands" class="headerlink" title="2) Labeling initial user commands"></a>2) Labeling initial user commands</h3><p>After the dialog tree is created, the user’s initial instructions need to be labeled.</p><p>Before labeling, some basic checks are again performed, such as user authentication, whether the user’s number of queued tasks in the recent period exceeds the limit, and whether the remaining number of tasks for labeling the user’s initial instructions is sufficient.</p><p>If any basic check fails, an error is reported and the call returns; otherwise, labeling of the user’s initial instructions begins.</p><p>The system then <strong>randomly selects</strong> a conversation tree whose initial instructions need review (prompts_need_review), and the sentences of the initial prompt are labeled and scored.</p><p>The common labeling dimensions are:</p><ul><li>Whether it is spam;</li><li>Not Chinese, inappropriate, contains PII, hate speech, or 
contains sexual content;</li></ul><p>The scoring dimensions are:</p><ul><li>quality, level of creativity, level of humor, level of politeness, level of violence</li></ul><p>When the annotation is submitted, the system saves each user’s annotation results and stores them in different tables according to the specifics of the annotation (e.g., whether it was reported, whether it was liked, etc.).</p><p>Next, it determines whether the number of times the dialog tree’s initial instruction has been labeled reaches the configured value; once enough labels have accumulated, it starts to score the initial-instruction node.</p><p>Note: the current score is calculated only from whether the text is spam and whether the language is unclear (not Chinese).</p><blockquote><p>Score calculation method: suppose the initial instruction of the dialog tree has been labeled by 3 users; 1 labeled it as spam and the other 2 did not, so the spam ratio is 1&#x2F;3. Similarly, if 1 user labeled the initial instruction as language-unintelligible and the other 2 did not, the language-unintelligible ratio is 1&#x2F;3. The total labeling score is then 1 - 1&#x2F;3 - 1&#x2F;3, about 0.34.</p></blockquote><p>In the end, when the labeling score exceeds the configured value (default 0.6), the labeling result of the root node is marked qualified and stored in the table, and the next step begins; otherwise it is marked unqualified, the dialog tree is set to the aborted_low_grade state, and it does not continue through the subsequent process.</p><h3 id="3-Replying-as-a-chatbot"><a href="#3-Replying-as-a-chatbot" class="headerlink" title="3) Replying as a chatbot"></a>3) Replying as a chatbot</h3><p>After the initial command is labeled, the dialog tree enters the growing state, at which point the chatbot can reply to it.</p><p>Before replying to the initial command as a chatbot, some basic checks are performed, such as user authentication, whether the user’s number of queued tasks in the recent period exceeds the limit, and whether the remaining number of replies for the dialog tree is sufficient.</p><p>If any basic check fails, the process is terminated; otherwise, a conversation tree whose initial instruction has been labeled complete and which is in the growing state is randomly selected for replying.</p><p>When the chatbot completes a reply task, the details of the reply (e.g., text, username, role: assistant, time, etc.) 
are stored in the database table as a new dialog message, and the current chatbot’s reply task is updated to done.</p><h3 id="4-Labeling-chatbot-replies"><a href="#4-Labeling-chatbot-replies" class="headerlink" title="4) Labeling chatbot replies"></a>4) Labeling chatbot replies</h3><p>After the chatbot replies to the initial instruction, the reply needs to be annotated.</p><p>Before labeling, some basic checks are done, such as user authentication, whether the number of tasks the user has queued recently exceeds the limit, and whether the number of remaining tasks for labeling the chatbot’s replies is sufficient.</p><p>If one of the basic checks fails, an error is reported and returned; otherwise, a conversation tree whose chatbot replies need labeling (reply_need_review) is <strong>randomly selected</strong>, and labeling of the chatbot replies begins.</p><p>In addition to the same labeling dimensions as the initial instruction, labeling chatbot replies adds one dimension:</p><ul><li>As a response to the prompt task, is it a bad response?</li></ul><p>And one scoring dimension:</p><ul><li>How helpful was it?</li></ul><p>When the annotation is submitted, the system saves each user’s annotation results and stores them in different tables according to the specifics of the annotation (e.g., whether it was reported, whether it was liked, etc.).</p><p>Next, it determines whether the number of times the chatbot reply has been labeled has reached the configured value; once that count is reached, it starts to score the bot-reply node.</p><p>Note: As with the initial instruction, only the spam and language-unintelligible (not Chinese) labels are currently used to calculate the score.</p><p>Eventually, when the labeling score exceeds the configured value (default 0.6), the labeling 
result of the node is set to qualified and stored in the table so the next step can start; otherwise it is set to unqualified, the conversation tree is set to the aborted_low_grade state, and it does not continue through the subsequent process.</p><p>When the labeling is completed, the system determines whether the number of nodes in the current conversation tree has reached a certain count; if the target value is reached, the state of the conversation tree is set to the sorting (ranking) state.</p><h3 id="5-Replying-as-a-user"><a href="#5-Replying-as-a-user" class="headerlink" title="5) Replying as a user"></a>5) Replying as a user</h3><p>After the bot replies have been labeled, the dialog tree continues to grow, and replies as a user are then used to expand the child nodes.</p><p>Before expanding the tree as a user, some basic checks are done, such as user authentication, whether the number of tasks the user has queued recently exceeds the limit, and whether the remaining number of replies in the tree is sufficient.</p><p>If one of the basic checks fails, the process is terminated; otherwise, a conversation tree whose bot replies have been fully labeled and which is in the growing state is randomly selected for replying.</p><p>When the user completes a reply task, the details of the reply (e.g., text, username, role-user, time, etc.) 
as a new dialog message in the database table and updates the current user’s reply task to done.</p><h3 id="6-Labeling-user-replies"><a href="#6-Labeling-user-replies" class="headerlink" title="6) Labeling user replies"></a>6) Labeling user replies</h3><p>After the user has replied in the dialog tree, the user’s replies need to be labeled in turn.</p><p>Before labeling, some basic checks are again necessary, such as user authentication, whether the number of tasks the user has queued recently exceeds the limit, and whether the number of remaining tasks for labeling the user’s replies is sufficient.</p><p>If one of the basic checks fails, an error is reported; otherwise, a conversation tree whose user replies need labeling (reply_need_review) is <strong>randomly selected</strong>, and labeling of the user’s reply text begins.</p><p>As with the initial instruction, the labeling dimensions for user replies are:</p><ul><li>whether it is spam;</li><li>whether it is not Chinese, inappropriate, contains PII, hate speech, or sexual content.</li></ul><p>The scoring dimensions are:</p><ul><li>quality, creativity, humor, politeness, violence.</li></ul><p>When the annotation is submitted, the system saves each user’s annotation results, and according to the specifics of the annotation (e.g., whether it was reported, whether it was liked, etc.) 
the annotation results are stored in different tables.</p><p>Next, the system determines whether the number of times the user replies have been annotated reaches the configured value; once that count is reached, it starts scoring the user-reply node.</p><p>Note: As with the initial instruction, only the spam and language-unintelligible (not Chinese) labels are currently used to calculate the score.</p><p>Eventually, when the labeling score exceeds the configured value (default 0.6), the labeling result of the node is set to qualified and stored in the table so the next step can start; otherwise it is set to unqualified, the conversation tree is set to the aborted_low_grade state, and it no longer continues through the subsequent process.</p><p>When the labeling is completed, the system determines whether the number of nodes in the current conversation tree has reached a certain count; if the target value is reached, the state of the conversation tree is set to the ranking state; otherwise, the conversation tree continues to grow and steps 3) to 6) repeat in a cycle.</p><h3 id="7-Ranking-chatbot-replies"><a href="#7-Ranking-chatbot-replies" class="headerlink" title="7) Ranking chatbot replies"></a>7) Ranking chatbot replies</h3><p>After the task of <strong>4) labeling the chatbot’s replies</strong> is completed and the conversation tree enters the sorting state, we need to sort (rank) the chatbot’s replies.</p><p>Before sorting, some basic checks are done, such as user authentication, whether the number of tasks the user has queued recently exceeds the limit, and whether the number of remaining tasks for sorting chatbot replies is sufficient.</p><p>If one of the basic checks fails, an error is reported and returned; otherwise, a conversation tree waiting for its chatbot replies to be sorted (incomplete_rankings) is <strong>randomly selected</strong> to start the sorting 
tasks.</p><p>Sorting involves ranking multiple chatbot replies by their correctness and validity.</p><p>When the sorting results are submitted, the system saves each user’s sorting results and records the sorting operation in the log table, adding 1 to the sorted-message count of the message node that was sorted.</p><p>When the number of sorted chatbot replies for a conversation tree node exceeds the configured value, score-based ranking starts for that conversation tree, which at this point is in the ready_for_scoring state.</p><p>The algorithm then computes the ranking of the chatbot replies on the dialog tree’s sub-nodes. If the scoring process completes successfully, the dialog tree is changed to the ready_for_export state; if there is an error in the scoring process, the dialog tree is set to the scoring_failed state.</p><h2 id="5-Summary"><a href="#5-Summary" class="headerlink" title="5. Summary"></a>5. Summary</h2><p>In addition to text annotation, we also come across image, audio, and video annotation during AI model training, which is done by a large number of annotation employees. 
However, as AI and models like GPT mature, and large models flourish both in China and abroad, labeling may be left to machines in the future, gradually becoming smarter and more automated.</p><p>According to market research, the current application of AI in the entertainment media field is dominated by <strong>content distribution</strong>, with some auxiliary applications in the <strong>content production stage</strong>; later it will move towards <strong>large-scale assistance</strong> in content creation or even <strong>large-scale replacement of human creation</strong>.</p><ul><li>Machine-assisted stage: generative AI greatly reduces the cost of and barriers to content production, cutting costs and raising efficiency for content companies, and established large-model companies can expect higher profits;</li><li>Machine “replacement” stage: users only need to input instructions to get the content they want created by AI; the content distribution link declines in importance, and the existing Internet entertainment giants face a shift from the challenge of “accurately providing content that meets user needs” to “providing content production tools that meet user needs”.</li></ul><p>To put it plainly: today these AIs are tools that help us keep our jobs, but in the future they may be the tools that take our jobs away!</p>]]></content>
    
    
    <summary type="html">What is data labeling? What does it have to do with machine learning, and what does it have to do with the big models like GPT that are so hot these days? This article will give you a real sense of how AI models are learned, from introduction to practice.</summary>
    
    
    
    <category term="Backend" scheme="https://www.nablepart.com/categories/Backend/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="Backend" scheme="https://www.nablepart.com/tags/Backend/"/>
    
    <category term="network" scheme="https://www.nablepart.com/tags/network/"/>
    
    <category term="Artificial Intelligence" scheme="https://www.nablepart.com/tags/Artificial-Intelligence/"/>
    
    <category term="Machine Learning" scheme="https://www.nablepart.com/tags/Machine-Learning/"/>
    
    <category term="data labeling" scheme="https://www.nablepart.com/tags/data-labeling/"/>
    
    <category term="GPT" scheme="https://www.nablepart.com/tags/GPT/"/>
    
    <category term="introduction" scheme="https://www.nablepart.com/tags/introduction/"/>
    
  </entry>
  
  <entry>
    <title>Let&#39;s Go</title>
    <link href="https://www.nablepart.com/ec45cd400545/"/>
    <id>https://www.nablepart.com/ec45cd400545/</id>
    <published>2023-11-06T15:04:00.000Z</published>
    <updated>2025-08-25T09:00:39.794Z</updated>
    
    <content type="html"><![CDATA[<h3 id="The-past-and-present-of-the-Go-language"><a href="#The-past-and-present-of-the-Go-language" class="headerlink" title="The past and present of the Go language"></a>The past and present of the Go language</h3><blockquote><p>Hello, everyone, thank you for coming. I am today’s main character: Go!</p></blockquote><h5 id="The-Birth-of-Go"><a href="#The-Birth-of-Go" class="headerlink" title="The Birth of Go"></a>The Birth of Go</h5><p>In September 2007, three engineers got together in Google Labs: Rob Pike, Ken Thompson, and Robert Griesemer. They gave me the simple name of Go.</p><p>Since all three of my “fathers” had me late in their careers, I was always “loved by all”. And all three of them are heavyweights: Ken is the father of B, C, and Unix, and won the Turing Award and the U.S. National Medal of Technology; Rob is one of the creators of the UTF-8 character encoding; and Robert was a main contributor to Google’s V8 and the HotSpot JVM. As a result, many also call me a “second-generation heavyweight”.</p><p>November 10, 2009 is my birthday. My early code was hosted on Google Code; as more people used me, it was gradually migrated to GitHub. I also slowly caught on in China, so let me give you a look at the scope of business I’m involved in:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/2023-11-07_223323.png"></p><blockquote><p>In addition to large-scale projects such as Docker containers and Kubernetes (known in the industry as k8s), which are implemented entirely in Go, more and more major Internet vendors are converting to Go. 
Today, the Go language has achieved a crucial position in the field of cloud computing and in the development of microservices for many large projects.</p></blockquote><h5 id="Why-Go-language-is-popular-all-over-the-world"><a href="#Why-Go-language-is-popular-all-over-the-world" class="headerlink" title="Why Go language is popular all over the world"></a>Why Go language is popular all over the world</h5><ul><li>First, I’m a compiled language; I can execute the same program 30% to 70% faster than an interpreted language like Java;</li><li>Secondly, as the C language of the 21st century, my development speed is not far behind, and my concise and efficient syntax leaves C and C++ developers in the dust;</li><li>My concurrency support is friendly, and being able to achieve concurrency in one line of code will make me more and more popular in high-concurrency scenarios;</li><li>Finally, the arrival of the cloud computing era and the massive demand for Internet servers will let me make a big splash in the future.</li></ul><blockquote><p>Don’t ask how big the ambition is; the ambition is to surpass Java, Python, and the other languages and sit at the top of the popularity rankings.</p></blockquote><h5 id="Go-object-oriented"><a href="#Go-object-oriented" class="headerlink" title="Go, object-oriented?"></a>Go, object-oriented?</h5><p>Many newcomers to Go are puzzled by the fact that I support encapsulation but not inheritance or polymorphism, so strictly speaking I am not an object-oriented language. But I support interfaces that any data type can implement, and I allow an object-oriented programming style, so at that level I am also a kind of object-oriented language.</p><p>So, am I an object-oriented language or not? 
Let’s take a look at how the official documentation defines me:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/342e1feb277f421c8e166978c0cb8675%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><blockquote><p>Answer: yes and no. On the one hand, Go has an object-oriented programming style; on the other hand, it doesn’t have object inheritance, so Go both is and is not quite an object-oriented language.<br>And it’s exactly the lack of object inheritance that makes Go lighter and simpler than C++&#x2F;Java. Think of base classes, derived classes, single inheritance, multiple inheritance, access control, and so on.</p></blockquote><h3 id="Quick-start"><a href="#Quick-start" class="headerlink" title="Quick start"></a>Quick start</h3><h4 id="1-hello-world-First-Go-program"><a href="#1-hello-world-First-Go-program" class="headerlink" title="1. hello, world First Go program"></a>1. hello, world First Go program</h4><p>Welcome, whether you’re following me into the world of the Go language because of a job requirement, a desire to learn a highly promising development language, or as your first foray into computers and programming.</p><p>After all this self-promotion, like the proverbial melon seller praising his own melons, I’m sure you’re ready for the real thing. So, without further ado. 
Life is short, Let’s Go !</p><blockquote><p>The first program in the life of the vast majority of computer professionals:</p></blockquote><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line"><span class="keyword">package</span> main</span><br><span class="line"><span class="keyword">import</span> <span class="string">&quot;fmt&quot;</span></span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">main</span><span class="params">()</span></span> &#123;</span><br><span class="line">    fmt.Println(<span class="string">&quot;hello world!&quot;</span>)</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>As in many languages, the main function is the first function to be executed, but the main package is special in Go: it defines a standalone executable program. Simply put, any file that contains the main function you want to run must declare package main:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">package</span> main</span><br></pre></td></tr></table></figure><p>The fmt package contains a number of standard input and output functions, such as the print function Print; the Println used in this program prints its arguments and then starts a new line.</p><blockquote><p>Don’t know how to run a Go program yet? 
Check out this article on installing and configuring Go to run</p></blockquote><p>The Go language provides two common ways to run a program:</p><ul><li>Build an executable with the go build command, and then run it</li></ul><blockquote><p>go build hello.go creates a compiled executable (an .exe on Windows) in the directory, which can be double-clicked on a Windows machine or executed from the command line or Git Bash.<br>.&#x2F;hello.exe</p></blockquote><ul><li>Run the file directly with the go run command</li></ul><blockquote><p>go run hello.go</p></blockquote><p>The result:</p><blockquote><p>hello world!</p></blockquote><h4 id="2-Find-duplicate-lines-in-a-list-of-strings"><a href="#2-Find-duplicate-lines-in-a-list-of-strings" class="headerlink" title="2. Find duplicate lines in a list of strings."></a>2. Find duplicate lines in a list of strings.</h4><blockquote><p>This program will teach you about for loops in Go, and the definition and use of string slices (slices: understood as variable length arrays):</p></blockquote><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">package</span> main</span><br><span class="line"><span class="keyword">import</span> <span class="string">&quot;fmt&quot;</span></span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">main</span><span class="params">()</span></span> &#123;</span><br><span class="line">    arr := []<span class="type">string</span>&#123;<span class="string">&quot;qq&quot;</span>, 
<span class="string">&quot;wechat&quot;</span>, <span class="string">&quot;dnf&quot;</span>, <span class="string">&quot;lol&quot;</span>, <span class="string">&quot;wechat&quot;</span>&#125;</span><br><span class="line">    counts := <span class="built_in">make</span>(<span class="keyword">map</span>[<span class="type">string</span>]<span class="type">int</span>)</span><br><span class="line">    <span class="keyword">for</span> i := <span class="number">0</span>; i &lt; <span class="built_in">len</span>(arr); i++ &#123;</span><br><span class="line">    counts[arr[i]]++</span><br><span class="line">&#125;</span><br><span class="line">    <span class="keyword">for</span> key, value := <span class="keyword">range</span> counts &#123;</span><br><span class="line">    <span class="keyword">if</span> n &gt; <span class="number">1</span> &#123;</span><br><span class="line">    fmt.Printf(<span class="string">&quot;值为 %s 的字符串重复了%d次 \n&quot;</span>, key, value)</span><br><span class="line">    &#125;</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Define a string slice: arr :&#x3D; []string{}, with the variable name on the left, and :&#x3D; is a common assignment statement in Go, e.g.: i :&#x3D; 0, which is the equivalent of initializing the i variable before assigning it:</p><figure class="highlight css"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="selector-tag">var</span> <span class="selector-tag">i</span> int <span class="selector-tag">i</span> = <span class="number">0</span></span><br></pre></td></tr></table></figure><p>In my Go world, defining a map is usually done by initializing the map with make to prevent null pointer exceptions. 
Or you can define a map with initialized key-value pair elements when you create it:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">counts := <span class="keyword">map</span>[<span class="type">string</span>]<span class="type">int</span> &#123;</span><br><span class="line">    <span class="string">&quot;qq&quot;</span> : <span class="number">1</span>,</span><br><span class="line">    <span class="string">&quot;wechat&quot;</span> : <span class="number">2</span>,</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><blockquote><p>map[string]int indicates a map whose keys are single strings and whose values are the int counts of those strings.</p></blockquote><p>Then we count the occurrences of each string in the list; here the increment statement counts[arr[i]]++ does the counting, which is equivalent to:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">counts[arr[i]] = counts[arr[i]] + <span class="number">1</span></span><br></pre></td></tr></table></figure><blockquote><p>The advantage of this increment form is that it eliminates a temporary variable and simplifies the code.</p></blockquote><p>Finally, we iterate to print out the strings that occur more than once.</p><p>The result:</p><blockquote><p>String with value wechat repeated 2 times</p></blockquote><p>for is the only loop keyword in Go, and it has two common forms:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td 
class="code"><pre><span class="line"><span class="keyword">for</span> init; condition; post &#123;</span><br><span class="line"></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><blockquote><p>The for loop takes no parentheses, but the curly braces after the for statement are required, and the left curly brace must be on the same line as the post statement.</p></blockquote><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">for</span> i, v := <span class="keyword">range</span> array &#123;</span><br><span class="line"></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><blockquote><p>for range is used to quickly traverse data structures with multiple elements (arrays, slices, maps, etc.); for arrays and slices, i is the index of the element (ranging from 0 to len(array)-1) and v is its value.<br>That is, v &#x3D;&#x3D; array[i]</p></blockquote><h3 id="Summary"><a href="#Summary" class="headerlink" title="Summary"></a>Summary</h3><p>This article introduces the history of the Go language and its typical application scenarios. It’s worth mentioning that distributed development and microservices architecture are still cutting-edge: many programmers are in traditional companies and small software firms and can’t yet get hands-on exposure to such technologies and projects.</p><p>So I wrote this Go series in the hope that students who want to learn but lack a path and direction, as well as practitioners already on the job, can truly appreciate the charm of Go. 
Many business teams at large Internet companies, such as Microsoft, ByteDance, Tencent, Kingsoft, and Xunlei, and even communications companies like Huawei, have begun transitioning to Go, and Go may well become the hottest development language of the cloud computing era!</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/2942348d26214e2587eda93eba3311c6%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>While the dawn is still uncertain and the moon still hangs in the sky, grab your backpack and set out with me!</p>]]></content>
    
    
    <summary type="html">In September 2007, three engineers, Rob Pike, Ken Thompson, and Robert Griesemer, got together in Google Labs and gave me the simple name of Go. Since all three of my &quot;fathers&quot; had me late in their careers, I was always the darling of the family.</summary>
    
    
    
    <category term="Technology" scheme="https://www.nablepart.com/categories/Technology/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="framework" scheme="https://www.nablepart.com/tags/framework/"/>
    
    <category term="Backend Technology Sharing" scheme="https://www.nablepart.com/tags/Backend-Technology-Sharing/"/>
    
    <category term="network" scheme="https://www.nablepart.com/tags/network/"/>
    
    <category term="recognize" scheme="https://www.nablepart.com/tags/recognize/"/>
    
    <category term="Crawler" scheme="https://www.nablepart.com/tags/Crawler/"/>
    
    <category term="absolutely" scheme="https://www.nablepart.com/tags/absolutely/"/>
    
    <category term="selenium" scheme="https://www.nablepart.com/tags/selenium/"/>
    
  </entry>
  
  <entry>
    <title>Interview salary suppression? That&#39;s because you don&#39;t understand multithreading and high concurrency.</title>
    <link href="https://www.nablepart.com/6a1f3371e93e/"/>
    <id>https://www.nablepart.com/6a1f3371e93e/</id>
    <published>2023-11-06T13:04:00.000Z</published>
    <updated>2025-08-25T09:00:39.794Z</updated>
    
    <content type="html"><![CDATA[<h2 id="1-Introduction"><a href="#1-Introduction" class="headerlink" title="1. Introduction"></a>1. Introduction</h2><p>As developers, whether in job interviews or in daily work, I believe none of us are strangers to high concurrency and multi-threading.</p><p>During job hunting, the ubiquitous backend job postings often require us to <strong>be familiar with high concurrency and multi-process&#x2F;multi-threading</strong>:</p><p><img src="https://s2.loli.net/2023/11/07/HVMlbD9rdPBv3KY.webp"></p><p>In our daily work, with the rise and development of mobile Internet applications, the system tasks and problems we face are becoming more and more complex.</p><p>Whether we are building large-scale web applications, processing huge data sets, or developing high-performance games, we all need to deal with a common challenge: high concurrency.</p><h3 id="1-1-What-is-high-concurrency"><a href="#1-1-What-is-high-concurrency" class="headerlink" title="1.1 What is high concurrency?"></a>1.1 What is high concurrency?</h3><p><strong>High concurrency refers to a large number of users or programs accessing and using a service or resource in the same time period.</strong></p><p>This means that we need to handle a large number of requests, data, and tasks at the same time. 
How to handle this situation efficiently becomes a critical technical task.</p><p>High concurrency is an area of challenge, but also an area of opportunity.</p><h3 id="1-2-What-does-multithreading-have-to-do-with-high-concurrency"><a href="#1-2-What-does-multithreading-have-to-do-with-high-concurrency" class="headerlink" title="1.2 What does multithreading have to do with high concurrency?"></a>1.2 What does multithreading have to do with high concurrency?</h3><p>Solving the problem of high concurrency not only improves the performance of the system, but also improves the user experience and brings more business opportunities to the enterprise.</p><p><strong>Multi-threading technology is one of the most important tools for addressing the challenge of high concurrency.</strong></p><p>Therefore, in this post, Xiao ❤ will take you on a deep dive into high concurrency and multithreading, covering how multithreading works, its application scenarios, and practical solutions to high-concurrency problems.</p><p>I believe that whether you are a junior programmer or a developer with some experience, you will be able to find useful information in this article.</p><h2 id="2-High-Concurrency"><a href="#2-High-Concurrency" class="headerlink" title="2. High Concurrency"></a>2. High Concurrency</h2><h3 id="2-1-Concurrency-and-Parallelism"><a href="#2-1-Concurrency-and-Parallelism" class="headerlink" title="2.1 Concurrency and Parallelism"></a>2.1 Concurrency and Parallelism</h3><h4 id="Concurrency"><a href="#Concurrency" class="headerlink" title="Concurrency"></a>Concurrency</h4><p>Concurrency is the execution of multiple tasks in the same time period. 
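</p><p>In Go, for instance, concurrency can be sketched with goroutines (a minimal, hypothetical illustration; any language with threads works the same way):</p><pre><code>package main

import (
	&quot;fmt&quot;
	&quot;sync&quot;
)

func main() {
	var wg sync.WaitGroup
	for _, task := range []string{&quot;requests&quot;, &quot;data&quot;, &quot;tasks&quot;} {
		wg.Add(1)
		go func(name string) { // each goroutine makes progress in the same time period
			defer wg.Done()
			fmt.Println(&quot;handling&quot;, name)
		}(task)
	}
	wg.Wait() // wait for all goroutines; output order is not guaranteed
}</code></pre><p>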
On a single-core processor, multiple threads switch execution between themselves by means of time-slice rotation, producing a concurrent scenario.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/654db73469bd4dc2a47a289d9fa56709%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>It is like video playback: when the frame rate is high enough (i.e., many frames are shown within a second), the naked eye perceives a continuous, smooth video.</p><h4 id="Parallel"><a href="#Parallel" class="headerlink" title="Parallel"></a>Parallel</h4><p>On a multicore processor, true simultaneity can be achieved by multiple threads performing different tasks at the same time; that is, in parallel.</p><p><img src="https://s2.loli.net/2023/11/07/KYPIFmZ3aANpi9t.webp"></p><p>Parallelism is the execution of multiple tasks at the same moment, usually requiring a multi-core processor. <strong>Parallelism is a subset of concurrency, and true parallelism can only be achieved if the hardware supports multiple parallel execution units</strong>.</p><h4 id="Thinking-Questions"><a href="#Thinking-Questions" class="headerlink" title="Thinking Questions"></a>Thinking Questions</h4><p><strong>Scenario 1:</strong> Why is it difficult for us to focus on answering the phone while playing a game with an intense group battle?</p><p><strong>Scenario 2:</strong> We can listen to music and handle the steering wheel at the same time while driving, without one interfering with the other. 
Can you guess whether our brains run concurrently or in parallel in each case?</p><h3 id="2-2-How-much-concurrency-is-too-much-concurrency"><a href="#2-2-How-much-concurrency-is-too-much-concurrency" class="headerlink" title="2.2 How much concurrency is too much concurrency?"></a>2.2 How much concurrency is too much concurrency?</h3><p>Having understood the concept of concurrency, let’s now talk about high concurrency.</p><p>High concurrency is a relative concept that depends on the performance and processing power of the system. <strong>Typically, a system is said to be highly concurrent when the number of requests or transactions it needs to handle exceeds its normal load.</strong></p><h3 id="2-3-Challenges-of-High-Concurrency"><a href="#2-3-Challenges-of-High-Concurrency" class="headerlink" title="2.3 Challenges of High Concurrency"></a>2.3 Challenges of High Concurrency</h3><p>While high concurrency brings many opportunities, it also comes with many challenges.</p><p>For example, a highly concurrent system needs to handle a large number of requests in a short period of time without degrading the performance or responsiveness of the system.</p><p>This may involve <strong>multiple users</strong> accessing a website at the same time, <strong>multiple clients</strong> requesting server data at the same time, or <strong>multiple threads</strong> accessing shared resources at the same time.</p><p>In a distributed system, whether it’s multiple users accessing, or multiple clients accessing a server, it all boils down to the business threads of each server accessing shared resources, so <strong>high-concurrency challenges are almost always related to multithreading.</strong></p><p>When faced with high concurrency, the following problems specifically arise.</p><p><img src="https://s2.loli.net/2023/11/07/JkSq59HPau3Ge4p.webp"></p><h4 id="1-Competing-conditions"><a href="#1-Competing-conditions" class="headerlink" title="1. Race conditions"></a>1. 
Race conditions</h4><p>Multiple threads accessing shared resources at the same time may lead to data inconsistency problems. For example, multiple threads depositing money into the same bank account at the same time may result in an incorrect balance.</p><h4 id="2-Deadlock"><a href="#2-Deadlock" class="headerlink" title="2. Deadlock"></a>2. Deadlock</h4><p>Multiple threads wait for each other to release resources, causing the system to stall. For example, thread A waits for thread B to release a lock, and thread B waits for thread A to release a lock, creating a deadlock.</p><h4 id="3-Resource-Contention"><a href="#3-Resource-Contention" class="headerlink" title="3. Resource Contention"></a>3. Resource Contention</h4><p>Multi-threaded access to shared resources can lead to resource contention problems that can degrade performance. For example, multiple threads competing for a database connection at the same time results in a slower database response.</p><h4 id="4-Thread-Safety"><a href="#4-Thread-Safety" class="headerlink" title="4. Thread Safety"></a>4. Thread Safety</h4><p>We need to ensure that no errors arise when multiple threads access shared data; for example, in a multi-threaded environment, reads and writes to data must be safe.</p><h4 id="5-Debugging-Difficulty"><a href="#5-Debugging-Difficulty" class="headerlink" title="5. Debugging Difficulty"></a>5. Debugging Difficulty</h4><p>Since the order of execution of multithreading is uncertain, problems may appear at different times. 
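<p>A minimal Python sketch of such nondeterminism (the function names here are ours, invented for illustration): two threads increment a shared counter, first without a lock and then with one. The unlocked run can lose updates, and whether it does varies from run to run.</p>

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    # read-modify-write without a lock: two threads can interleave in between
    global counter
    for _ in range(n):
        counter += 1

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:  # only one thread at a time touches the counter
            counter += 1

def run(worker, n=100_000):
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(n,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print(run(unsafe_increment))  # may print less than 200000, and can differ between runs
print(run(safe_increment))    # always 200000
```

<p>The locked version is deterministic; the unlocked one is exactly the kind of bug that "appears at different times" and resists reproduction.</p>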
So debugging of multithreaded programs is relatively complex and problems are difficult to reproduce.</p><h3 id="2-4-Solving-High-Concurrency-Problems"><a href="#2-4-Solving-High-Concurrency-Problems" class="headerlink" title="2.4 Solving High Concurrency Problems"></a>2.4 Solving High Concurrency Problems</h3><p>In order to solve the high concurrency problem, appropriate techniques and methods need to be used, which are as follows.</p><p><img src="https://s2.loli.net/2023/11/07/YIw83fNqVbmaA7U.webp"></p><h4 id="1-Lock-Mechanism"><a href="#1-Lock-Mechanism" class="headerlink" title="1. Lock Mechanism"></a>1. Lock Mechanism</h4><p>Locks are used to <strong>protect shared resources</strong> by ensuring that only one thread can access them at a time.</p><p>Locks can be categorized as <strong>mutex locks</strong> and <strong>read&#x2F;write locks</strong>. Mutual-exclusion locks are used to exclusively occupy a resource, while read&#x2F;write locks allow more than one thread to read a resource at the same time, but only one thread is allowed to write to it.</p><p>Specific implementation details can be found using the locking mechanisms provided by the programming language, such as the <code>synchronized</code> keyword in Java or <code>threading.Lock</code> in Python.</p><h4 id="2-Concurrent-Data-Structures"><a href="#2-Concurrent-Data-Structures" class="headerlink" title="2. Concurrent Data Structures"></a>2. Concurrent Data Structures</h4><p>Use concurrent data structures such as concurrent queues and hash tables to <strong>reduce resource contention</strong>. These data structures are optimized to work efficiently in a multithreaded environment.</p><p>For example, Java provides <code>ConcurrentHashMap</code>, which is a thread-safe hash table that can be used in highly concurrent environments without explicit locking.</p><h4 id="3-Thread-Pooling"><a href="#3-Thread-Pooling" class="headerlink" title="3. Thread Pooling"></a>3. 
Thread Pooling</h4><p>Manage and reuse threads to improve performance. A thread pool controls the number of threads, <strong>to avoid wasting resources by having too many threads</strong>.</p><p>In Java, you can use <code>ExecutorService</code> to create and manage thread pools. This avoids frequent creation and destruction of threads and improves efficiency.</p><h4 id="4-Message-Passing"><a href="#4-Message-Passing" class="headerlink" title="4. Message Passing"></a>4. Message Passing</h4><p>Communicating between threads through a message-passing model avoids shared memory. Message passing ensures that data is passed safely between threads, <strong>reducing race conditions</strong>.</p><p>For example, in Go, you can use channels (<code>channel</code>) for messaging to ensure the safe passing of data.</p><h4 id="5-Atomic-operations"><a href="#5-Atomic-operations" class="headerlink" title="5. Atomic operations"></a>5. Atomic operations</h4><p>Atomic operations are indivisible operations that ensure that <strong>multiple threads operating on shared variables are safe</strong>. Atomic operations are usually provided by hardware instructions or language-level libraries and can be used to implement various synchronization mechanisms.</p><p>In C&#x2F;C++, you can use atomic operations to manipulate shared variables, for example with the <code>atomic</code> library. In MySQL, transactions in the <code>InnoDB</code> engine come with atomicity guarantees of their own.</p><h2 id="3-Multithreading"><a href="#3-Multithreading" class="headerlink" title="3. Multithreading"></a>3. 
Multithreading</h2><h3 id="3-1-Processes-and-Threads"><a href="#3-1-Processes-and-Threads" class="headerlink" title="3.1 Processes and Threads"></a>3.1 Processes and Threads</h3><p>In concurrent work, when execution switches from one program segment to another, the state built up by the previous segment is lost if it is not saved, so the operating system introduces processes to provide resource isolation.</p><h4 id="1-Processes"><a href="#1-Processes" class="headerlink" title="1. Processes"></a>1. Processes</h4><p><strong>A process is the basic unit for allocating the resources a program needs at runtime</strong>; it has an independent address space and an independent stack, so that when processes are switched, each one’s stored data remains unaffected.</p><p>Because a process involves a large amount of resources, it is strictly controlled by the operating system (think of it like land-use approval in provinces and cities: it is handled very cautiously, especially in first-tier cities, so a core department controls it uniformly).</p><p>Therefore, <strong>process switching happens in kernel mode and is scheduled centrally by the operating system kernel.</strong></p><blockquote><p><strong>Trivia:</strong><br>The operating system is divided into kernel mode and user mode. A <code>CPU</code> (Central Processing Unit) running in kernel mode can access arbitrary data, including peripheral devices such as network cards and hard disks, and it will not be preempted while it occupies the CPU.<br>A CPU in user mode can only access memory in a restricted manner, is not allowed to access peripheral devices, and may be preempted by other programs.</p></blockquote><h4 id="2-Threads"><a href="#2-Threads" class="headerlink" title="2. Threads"></a>2. 
Threads</h4><p>Switching processes requires entering kernel mode and consumes a lot of resources, which is why the concept of threads was introduced.</p><p>A <strong>thread is the smallest unit of operating system scheduling and is an execution flow within a program</strong>. A process can contain multiple threads, which share process resources such as memory space and file handles, but each has a separate stack memory.</p><p>A thread itself takes up almost no resources; it <strong>shares the address space and heap</strong> with the other threads in its process, so scheduling overhead is relatively small, yet each thread has an independent CPU context (CPU registers, program counter, etc.).</p><p>Threads in the same process are like tenants sharing the same plot of land while each keeps its own office building, and switching between threads is scheduled centrally by the operating system.</p><blockquote><p>Trivia: Threads are divided into kernel threads and user threads, and user threads must be bound to kernel threads before they can run.</p></blockquote><h3 id="3-2-Multithreading-Concepts"><a href="#3-2-Multithreading-Concepts" class="headerlink" title="3.2 Multithreading Concepts"></a>3.2 Multithreading Concepts</h3><p>Multithreading is a form of concurrent execution that allows a program to be divided into multiple independent threads, each of which can perform tasks independently. It is like multiple crews working on the same plot of land at the same time without affecting each other.</p><h4 id="1-Creating-and-managing-threads"><a href="#1-Creating-and-managing-threads" class="headerlink" title="1. Creating and managing threads"></a>1. Creating and managing threads</h4><p>Creating and managing threads involves the scheduling mechanism of the operating system, which is implemented in different ways in different programming languages. 
Let’s take Python as an example:</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line">import threading</span><br><span class="line"></span><br><span class="line">def my_function():</span><br><span class="line">    print(&quot;running in a worker thread&quot;)  # task executed by the thread</span><br><span class="line"></span><br><span class="line">thread = threading.Thread(target=my_function)</span><br><span class="line">thread.start()  # start the thread</span><br><span class="line">thread.join()   # wait for the thread to finish</span><br></pre></td></tr></table></figure><h4 id="2-Thread-synchronization-and-mutual-exclusion"><a href="#2-Thread-synchronization-and-mutual-exclusion" class="headerlink" title="2. Thread synchronization and mutual exclusion"></a>2. 
Thread synchronization and mutual exclusion</h4><p>When multiple threads access a shared resource at the same time, <strong>it may lead to a race condition</strong>, where multiple threads compete with each other for the resource, which may result in inconsistent data.</p><p>To solve this problem, we use a locking mechanism to ensure that only one thread can access the shared resource at a time.</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">import threading</span><br><span class="line"></span><br><span class="line">lock = threading.Lock()</span><br><span class="line"></span><br><span class="line">def my_function():</span><br><span class="line">    with lock:  # acquired on entry, released on exit</span><br><span class="line">        pass    # access the shared resource here</span><br></pre></td></tr></table></figure><h3 id="3-3-Multi-threaded-applications"><a href="#3-3-Multi-threaded-applications" class="headerlink" title="3.3 Multi-threaded applications"></a>3.3 Multi-threaded applications</h3><p>Multi-threading can not only improve the performance of the program, but also improve the user experience. In real life, we often encounter multithreaded application scenarios.</p><h4 id="1-Web-server"><a href="#1-Web-server" class="headerlink" title="1. Web server"></a>1. 
Web server</h4><p>Imagine a popular social media site with millions of users visiting the site at the same time.</p><p>These users request different pages, upload photos, make posts, and also have some background tasks such as data backup, new post push, and so on.</p><p>At this point, the <strong>Web server needs to handle requests from multiple users at the same time. Each user request can be seen as a thread, and multithreading allows the server to respond to multiple requests at the same time.</strong></p><p>For example, one user may request to view their profile, while another user may request to post a new status update. These two requests can be handled by different threads at the same time, improving the server’s response time.</p><h4 id="2-Database-system"><a href="#2-Database-system" class="headerlink" title="2. Database system"></a>2. Database system</h4><p>Suppose an online banking system where thousands of customers access their account information simultaneously, checking balances, transferring funds, and so on. In addition, the banking system needs to handle customers’ deposit and withdrawal operations.</p><p>At this point, the <strong>database system needs to handle multiple customer requests</strong> at the same time. Each customer request can be regarded as a thread, and multiple threads can query the database at the same time to ensure that each customer’s account information is up-to-date.</p><h4 id="3-Game-Interaction"><a href="#3-Game-Interaction" class="headerlink" title="3. Game Interaction"></a>3. Game Interaction</h4><p>A multiplayer online game where dozens of players participate in the game at the same time. This game needs to handle player actions, physics simulation, AI computation, and multiplayer game interaction simultaneously.</p><p>At this point, the <strong>game engine can use multiple threads to handle different aspects of the task</strong>. 
One thread can be responsible for rendering the game graphics, another thread can handle the player’s actions, and another thread can be responsible for simulating the physics effects in the game.</p><p>With multithreading, the game system responds more smoothly and players can enjoy a highly interactive gaming experience.</p><h2 id="4-Summary"><a href="#4-Summary" class="headerlink" title="4. Summary"></a>4. Summary</h2><p>Multithreading and concurrency are like the busy streets of our daily lives, where everyone handles their own business while coordinating their interactions with others.</p><p><img src="https://s2.loli.net/2023/11/07/RnMyHUVpPFILv7t.webp"></p>]]></content>
    
    
    <summary type="html">As developers, whether in job interviews or in their daily work, I&#39;m sure you&#39;re no stranger to high concurrency and multithreading.</summary>
    
    
    
    <category term="Technology" scheme="https://www.nablepart.com/categories/Technology/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="network" scheme="https://www.nablepart.com/tags/network/"/>
    
    <category term="Interview" scheme="https://www.nablepart.com/tags/Interview/"/>
    
    <category term="developers" scheme="https://www.nablepart.com/tags/developers/"/>
    
    <category term="multithreading" scheme="https://www.nablepart.com/tags/multithreading/"/>
    
    <category term="interviews" scheme="https://www.nablepart.com/tags/interviews/"/>
    
    <category term="stranger" scheme="https://www.nablepart.com/tags/stranger/"/>
    
    <category term="daily" scheme="https://www.nablepart.com/tags/daily/"/>
    
  </entry>
  
  <entry>
    <title>If I ask about microservices, how should Your Excellency respond?</title>
    <link href="https://www.nablepart.com/1bb554f53f98/"/>
    <id>https://www.nablepart.com/1bb554f53f98/</id>
    <published>2023-11-06T12:04:00.000Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
<content type="html"><![CDATA[<p><strong>Catalog</strong></p><ol><li>Introduction</li><li>Why microservices are needed</li><li>Service Discovery</li><li>Inter-Service Communication</li><li>Microservice Splitting</li></ol><h1 id="1-Introduction"><a href="#1-Introduction" class="headerlink" title="1. Introduction"></a>1. Introduction</h1><h2 id="1-1-The-Status-of-Microservices"><a href="#1-1-The-Status-of-Microservices" class="headerlink" title="1.1 The Status of Microservices"></a>1.1 The Status of Microservices</h2><p>In today’s Internet world, whether you build 2B [for enterprises] or 2C [for individual users] products, microservices development has blossomed everywhere, and monolithic systems survive mostly in traditional software and in the architectures of medium-to-large state-owned enterprises.</p><p>But do you really understand microservices?</p><p><img src="https://s2.loli.net/2023/11/07/zVJEpgSHcO7LXm1.webp"></p><p>Let’s take a look at the requirements for microservices in a backend development engineer job JD:</p><blockquote><p><strong>Candidates need to be proficient in the use of microservices frameworks, deeply understand the principles and operating mechanisms of microservices, and have unique insights into service splitting, inter-service calls, and service governance.</strong></p></blockquote><p>Does that feel vague or only half-understood? It’s okay, we’ll get it after reading this article 😉</p><h2 id="1-2-What-are-Microservices"><a href="#1-2-What-are-Microservices" class="headerlink" title="1.2 What are Microservices"></a>1.2 What are Microservices</h2><p>There are many definitions of microservices, the official one is as follows:</p><blockquote><p>Microservices are <strong>an architectural and organizational approach</strong> to developing software that consists of small, independent services communicating through well-defined APIs that are handled by small, independent teams. 
Microservices architecture makes applications easier to scale and faster to develop, thereby accelerating innovation and reducing time to market for new features.</p></blockquote><p>In layman’s terms, microservices are an <strong>architectural pattern</strong>, or an <strong>architectural style</strong>. It advocates dividing a <strong>single application into a number of smaller services, each of which is managed independently</strong>, with services cooperating and coordinating with each other, ultimately providing users with a product that can be iterated and shipped rapidly.</p><p>For example, an e-commerce system may contain microservices such as users, commodities, warehousing, orders, payments, and points systems:</p><p><img src="https://s2.loli.net/2023/11/07/9t5F8CXKEpUaBsO.webp"></p><p>Today we will talk about microservices in detail, along with the knowledge needed for microservices interviews. I believe that after reading this article, whether you are facing an interview at a big Internet company or at a state-owned or foreign enterprise, you will profit whenever microservices come up 💰</p><h1 id="2-Why-we-need-microservices"><a href="#2-Why-we-need-microservices" class="headerlink" title="2. Why we need microservices"></a>2. Why we need microservices</h1><h2 id="2-1-The-problem-with-monolithic-architectures"><a href="#2-1-The-problem-with-monolithic-architectures" class="headerlink" title="2.1 The problem with monolithic architectures"></a>2.1 The problem with monolithic architectures</h2><p>In the beginning, when there were no microservices, applications in the Internet world used a monolithic architecture, i.e., all the modules under the system were put into a single service.</p><p><img src="https://s2.loli.net/2023/11/07/uhJr2dYnbVs5y8X.webp"></p><p>Often these services contain anywhere from a few to dozens of modules, developed jointly by teams of dozens of people. 
At this time, the problems of monolithic architecture are gradually exposed:</p><ol><li>modules are tightly coupled behind a single set of APIs, making the system difficult to maintain and extend;</li><li>every business area must use the same technology stack, making it difficult to quickly adopt new technologies, such as switching between Java and PHP;</li><li>system modifications must be deployed&#x2F;upgraded together with the whole system, which makes operation and maintenance complex and error-prone;</li><li>when the system load increases, it is difficult to scale horizontally;</li><li>if there is a problem in one part of the system, it may affect the whole system and cause collateral problems.</li></ol><p>Thus, in 2012, the concept of microservices was proposed as a way to accelerate the development of Web and mobile applications, and began to attract much attention.</p><p>By 2015, more and more Internet communities, forums, and Internet giants began to use microservices, and from 2018 onward, more and more small and medium-sized enterprises also began migrating to microservice architectures.</p><p>Today, in 2023, according to a recent market research report, 95% of the applications on the market have been developed using a microservices architecture.</p><h2 id="2-2-Disadvantages-of-Microservices"><a href="#2-2-Disadvantages-of-Microservices" class="headerlink" title="2.2 Disadvantages of Microservices"></a>2.2 Disadvantages of Microservices</h2><h3 id="1-High-code-complexity"><a href="#1-High-code-complexity" class="headerlink" title="1) High code complexity"></a>1) High code complexity</h3><p><strong>Microservices interact with each other via HTTP, RPC, etc.</strong> Compared with the API form of a monolithic architecture, the caller must account for failures, overload, and message loss on the callee’s side, so the code logic is more complex.</p><p>Transactional operations between microservices need to solve the problem of distributed transactions, and 
may need to introduce two-phase commit, best-effort notification, and other approaches to solve it.</p><p>When a few functions (or database fields) overlap between microservices but cannot be extracted into a separate microservice, duplicate development or data redundancy is usually required, which increases development and maintenance costs.</p><h3 id="2-Heavy-operation-and-maintenance-tasks"><a href="#2-Heavy-operation-and-maintenance-tasks" class="headerlink" title="2) Heavy operation and maintenance tasks"></a>2) Heavy operation and maintenance tasks</h3><p>At launch time, there may be coupling relationships between services that require sequential, orderly deployment. The most common situation is that service A calls an interface of service B, i.e., A depends on B. When going live, you have to upgrade system B first and then system A.</p><p>Moreover, since the services are independent, a well-designed monitoring system is needed to monitor the operation status of each microservice. 
<strong>Real-time monitoring exists to catch business modules that temporarily drop offline, which would otherwise affect overall business availability</strong>.</p><h3 id="3-Impact-on-performance"><a href="#3-Impact-on-performance" class="headerlink" title="3) Impact on performance"></a>3) Impact on performance</h3><p>Compared to the monolithic architecture, the latency of REST and RPC communication between microservices is higher, especially when the call chain is long.</p><p>Moreover, when the business chain is long, it is more difficult to troubleshoot problems, and if a bug involves multiple services, troubleshooting becomes even more complicated.</p><h2 id="2-2-Characteristics-Principles-of-Microservices"><a href="#2-2-Characteristics-Principles-of-Microservices" class="headerlink" title="2.3 Characteristics &amp; Principles of Microservices"></a>2.3 Characteristics &amp; Principles of Microservices</h2><p>Although microservices have drawbacks that monolithic architectures do not, their key advantage of high cohesion within and low coupling between functional modules is reason enough to embrace them. In addition, microservices have the following characteristics:</p><h3 id="1-Single-responsibility"><a href="#1-Single-responsibility" class="headerlink" title="1) Single responsibility"></a>1) Single responsibility</h3><p>Generally, microservices are divided according to business logic, and each microservice is only responsible for the functions belonging to its own business domain, with clear logic and high module cohesion. For example, in the user system, product system, order system, payment system, etc. mentioned above, each module has its own business characteristics.</p><h3 id="2-Autonomy"><a href="#2-Autonomy" class="headerlink" title="2) Autonomy"></a>2) Autonomy</h3><p>Microservices are independent entities that can be deployed and upgraded alone, and each microservice communicates with the others through standard REST&#x2F;RPC interfaces. Microservices can also be built with different technology stacks without affecting other modules.</p><p>Most commonly, many algorithm modules are implemented in Python while the backend business is implemented in Go or Java. <strong>When the backend module needs to call the algorithm module’s interface, as long as the protocol of the API interface is defined, the call can be made smoothly</strong>.</p><p>Techniques such as isolation and circuit breaking ensure that a failure in one microservice does not affect other modules; this is the fallback strategy that underpins microservice autonomy.</p><h3 id="3-Scalable"><a href="#3-Scalable" class="headerlink" title="3) Scalable"></a>3) Scalable</h3><p>As the business grows, a module can be expanded horizontally or vertically to facilitate elastic scaling and gray-scale release.</p><p>In different business scenarios, different modules face different pressure; for example, in an e-commerce system during a limited-time flash sale, the commodity system and the warehousing system bear most of the traffic pressure, while the payment system may be more relaxed.</p><p>To cope with such scenarios, we can <strong>deploy the commodity and warehousing modules across multiple machines and spread the traffic evenly over them to relieve the high-concurrency pressure on these modules</strong>.</p><h3 id="4-Flexible-Combination"><a href="#4-Flexible-Combination" class="headerlink" title="4) Flexible Combination"></a>4) Flexible Combination</h3><p>Under the microservice architecture, functional reuse can be achieved by combining existing microservices. Although this can also be done under a monolithic architecture, different combinations may make the interactions between monolithic modules more and more complex, ultimately leaving the system chaotic.</p><h1 id="3-Service-Discovery"><a href="#3-Service-Discovery" class="headerlink" title="3. Service Discovery"></a>3. Service Discovery</h1><h2 id="3-1-Service-Governance"><a href="#3-1-Service-Governance" class="headerlink" title="3.1 Service Governance"></a>3.1 Service Governance</h2><p>As applications evolve from monolithic architectures to microservices, the demand for service governance such as <strong>service discovery, load balancing, and circuit breaking &amp; rate limiting</strong> between microservices increases significantly due to the substantial growth in the number of fine-grained microservice applications.</p><p>In a microservice scenario, each service has multiple instances, and a mechanism is needed to resolve the requested service name to the corresponding service instance address, which requires service discovery and load balancing mechanisms.</p><h2 id="3-2-Service-Discovery"><a href="#3-2-Service-Discovery" class="headerlink" title="3.2 Service Discovery"></a>3.2 Service Discovery</h2><p>Service discovery consists of two parts:</p><ul><li>Service registration: each service sends its instance information to the registry and provides a heartbeat mechanism to ensure that the service stays online;</li><li>Service discovery: get the list of instances corresponding to a service from the registry.</li></ul><p><img src="https://s2.loli.net/2023/11/07/X47SnpDMhJtWyLr.webp"></p><p>Three common strategies for service discovery:</p><ol><li>ETCD as the registry: each service module registers its service name and instance information with ETCD, and keeps its lease from expiring to stay online;</li><li>Redis as the registry: each 
module maintains a timer and periodically sends its instance information and a timestamp to the registry module, and the registry derives a maximum timeout from the heartbeat interval plus network jitter. If a module has not sent heartbeat packets to the registry for too long, it is kicked out of the service cluster;</li><li>Istio: service registration and discovery are accomplished by the control-plane Pilot and the data-plane Envoy. Pilot obtains service resource information such as services and endpoints through the K8s APIServer interface and converts it into xDS messages for the Envoy components in the data plane; when Envoy receives a request, it selects a service instance to forward it to according to the configured load balancing policy.</li></ol><h2 id="3-3-Load-Balancing"><a href="#3-3-Load-Balancing" class="headerlink" title="3.3 Load Balancing"></a>3.3 
Load Balancing</h2><h3 id="1-What-is-load-balancing"><a href="#1-What-is-load-balancing" class="headerlink" title="1) What is load balancing?"></a>1) What is load balancing?</h3><p>Load balancing is generally used in conjunction with <strong>service discovery</strong>: one of the instances resolved by service discovery is selected to receive the request, and this selection uses a load balancing policy.</p><p><img src="https://s2.loli.net/2023/11/07/zHJdRa3kUVMIBxL.webp"></p><p>As shown in the figure above, when the business deploys 4 instances of the commodity service and a request arrives at the registry, <strong>the registry routes the request to different servers</strong> according to the load balancing algorithm, ensuring that no single machine is loaded so heavily that performance suffers.</p><h3 id="2-Load-Balancing-Algorithm"><a href="#2-Load-Balancing-Algorithm" class="headerlink" title="2) Load Balancing Algorithm"></a>2) Load Balancing Algorithm</h3><p>Compare balanced forwarding to a teacher handing out candy, where the teacher is the registry and the students are the servers. 
A load-balancing algorithm is the teacher’s way of giving out the candy: no student goes hungry, and no single student gets all of it (too much sugar is bad for anyone).</p><h4 id="1-polling-method"><a href="#1-polling-method" class="headerlink" title="1, polling method"></a><strong>1, polling method</strong></h4><p>Requests are assigned to back-end servers in sequential rotation, treating every server equally; the algorithm <strong>ignores each server’s connection count and current load</strong>.</p><p>In the candy analogy, polling means handing candy to the students one by one: student 1, student 2, student 3 …… The rain falls evenly, and each student ends up with an almost equal number of candies.</p><h4 id="2-randomized-method"><a href="#2-randomized-method" class="headerlink" title="2, randomized method"></a><strong>2, randomized method</strong></h4><p>A machine is selected at random for each request. By probability theory, <strong>as the number of calls grows, the distribution of calls approaches the average, i.e. the same result as polling</strong>.</p><p>In the candy analogy, the random method means the teacher draws lots before each hand-out to decide who gets the candy, so each student receives an indeterminate number of candies. 
But probabilistically, with a large number of requests the result approaches the average, much like rolling dice or tossing a coin.</p><h4 id="3-weighted-polling-method"><a href="#3-weighted-polling-method" class="headerlink" title="3, weighted polling method"></a><strong>3, weighted polling method</strong></h4><p>Based on each server’s hardware configuration and load, high-performance, lightly loaded machines are given higher weights, so that they receive more of the traffic.</p><p><strong>When the number of requests is large, the ratio of requests handled by each server converges to the ratio of the weights</strong>.</p><p>In the candy analogy, weighted polling reflects the teacher’s preferences, with the weight acting as a student’s score: 3 candies for students scoring 90 or above, 2 for 75 or above, 1 for 60 or above, and none for those who fail.</p><p>Each student’s share of the candy follows the weights, so the more rounds of hand-outs there are, the closer each student’s share gets to the weight ratio.</p><h4 id="4-source-address-hashing-method"><a href="#4-source-address-hashing-method" class="headerlink" title="4, source address hashing method"></a><strong>4, source address hashing method</strong></h4><p>The client’s IP address is hashed and taken modulo the number of servers to select a particular server. 
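</p><p>A minimal sketch of this idea in Go (our illustration; the FNV hash and the instance names are our assumptions, not prescribed by the text):</p>

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// pickByClientIP maps a client IP to a server index by hashing the
// address and taking it modulo the server count: the same IP always
// yields the same index while the server list and hash are unchanged.
func pickByClientIP(clientIP string, numServers int) int {
	h := fnv.New32a()
	h.Write([]byte(clientIP))
	return int(h.Sum32() % uint32(numServers))
}

func main() {
	servers := []string{"goods-1", "goods-2", "goods-3"} // invented instance names
	for _, ip := range []string{"10.0.0.1", "10.0.0.2", "10.0.0.1"} {
		fmt.Printf("%s -> %s\n", ip, servers[pickByClientIP(ip, len(servers))])
	}
}
```

<p>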
<strong>When the list of back-end servers and the hash algorithm are unchanged, the same client is mapped to the same server on every request</strong>.</p><p>In the candy analogy, source address hashing means the teacher always gives candies of a particular brand to the same student: Heijirou candies go to student 1, Shanghaojia candies go to student 2, and so on. …… The number of candies each student gets depends on how many candies of the corresponding brand there are.</p><h1 id="4-Inter-service-calls"><a href="#4-Inter-service-calls" class="headerlink" title="4. Inter-service calls"></a>4. Inter-service calls</h1><p>We have already introduced this characteristic of microservices above: <strong>microservices interact with each other through HTTP, RPC, and similar mechanisms</strong>.</p><p>HTTP (HyperText Transfer Protocol) is a protocol, while RPC (Remote Procedure Call) is an invocation style; both are commonly used for service calls.</p><p>RPC typically runs directly on top of TCP (or on HTTP), whereas HTTP-based calls ride on the HTTP protocol, itself layered over TCP, so RPC is naturally lighter-weight and more efficient than HTTP.</p><h2 id="4-1-HTTP"><a href="#4-1-HTTP" class="headerlink" title="4.1 HTTP"></a>4.1 HTTP</h2><h3 id="1-Introduction-1"><a href="#1-Introduction-1" class="headerlink" title="1) Introduction"></a>1) Introduction</h3><p>HTTP service development, i.e. developing RESTful-style service interfaces, is a common means of communication when systems expose few interfaces and interact lightly.</p><h3 id="2-Advantages"><a href="#2-Advantages" class="headerlink" title="2) Advantages"></a>2) Advantages</h3><p>HTTP interfaces are <strong>simple, direct, easy to develop, and can ride on the ready-made HTTP protocol</strong>. When developing a service, the parties need to agree on an interface document that strictly defines inputs and outputs and specifies each interface’s request method and parameters.</p><h2 id="4-2-RPC"><a href="#4-2-RPC" class="headerlink" title="4.2 RPC"></a>4.2 RPC</h2><h3 id="1-Introduction-2"><a href="#1-Introduction-2" class="headerlink" title="1) Introduction"></a>1) Introduction</h3><p><strong>First of all, what is RPC?</strong></p><blockquote><p>RPC (Remote Procedure Call) is a <strong>computer communication protocol</strong>. The protocol allows a program running on one computer to call a subroutine in another address space (usually on another computer on an open network), and the programmer writes the call as if it were local, without additional programming for the interaction (i.e. without attending to its details). –Wikipedia</p></blockquote><p>Non-specialists may well be confused by that formal definition, as I was at first. RPC is not hard to understand: it is simply a <strong>communication protocol</strong>, i.e. a format or convention that both parties agree to follow.</p><h3 id="2-Characteristics"><a href="#2-Characteristics" class="headerlink" title="2) Characteristics"></a>2) Characteristics</h3><p>Some readers may still wonder: since both are communication protocols, should we choose HTTP or RPC for program interaction and application development?</p><p>The answer lies in their differences, mainly across the following four points:</p><h4 id="transfer-protocol"><a href="#transfer-protocol" class="headerlink" title="transfer protocol"></a>transfer protocol</h4><ul><li>RPC is a communication protocol based on the TCP transport layer or the HTTP/2 application layer;</li><li>HTTP-based calls use only the HTTP protocol, including HTTP/1.x (i.e. HTTP/1.0 and 1.1) and HTTP/2; many browsers still default to 1.x when fetching server data.</li></ul><h4 id="Performance-Consumption-from-data-type-comparison"><a href="#Performance-Consumption-from-data-type-comparison" class="headerlink" title="Performance Consumption (from data type comparison)"></a>Performance Consumption (from data type comparison)</h4><ul><li>RPC can achieve efficient binary transmission, e.g. via gRPC (an RPC framework);</li><li>HTTP mostly carries JSON, whose byte size and serialization cost more than gRPC’s binary encoding.</li></ul><h4 id="load-balancing"><a href="#load-balancing" class="headerlink" title="load balancing"></a>load balancing</h4><ul><li>RPC frameworks generally ship with a built-in load-balancing strategy;</li><li>HTTP needs an external component such as Nginx or HAProxy.</li></ul><h4 id="Transport-Efficiency"><a href="#Transport-Efficiency" class="headerlink" title="Transport Efficiency"></a>Transport Efficiency</h4><ul><li>RPC can use a custom TCP protocol to keep request messages small, or use HTTP/2, which also reduces message size well and improves transmission efficiency;</li><li>HTTP over the HTTP/1.x protocol carries a lot of useless content in each request; over HTTP/2, a simple encapsulation can serve as RPC, at which point a standard RPC framework mainly retains its service-governance advantages.</li></ul><h3 id="3-Popular-RPC-Frameworks"><a href="#3-Popular-RPC-Frameworks" class="headerlink" title="3) Popular RPC Frameworks"></a>3) Popular RPC Frameworks</h3><ul><li>gRPC: based on the HTTP/2 protocol, using the Netty framework underneath in its Java implementation;</li><li>Thrift: a cross-language service development framework whose code generator saves a series of basic development chores;</li><li>Dubbo: both the protocol and the serialization framework are pluggable.</li></ul><h2 id="4-3-Differences-between-HTTP-and-RPC"><a href="#4-3-Differences-between-HTTP-and-RPC" class="headerlink" title="4.3 Differences between HTTP and RPC"></a>4.3 Differences between HTTP and RPC</h2><p>In summary, <strong>RPC beats HTTP on performance cost, transmission efficiency, and load balancing</strong>. At this point a careful reader may ask: then why do our everyday systems and websites use the HTTP protocol instead of switching to RPC?</p><p>Here is a common analogy: HTTP is like Mandarin, while RPC is like a local dialect, such as Cantonese or the southwestern dialects of Yunnan, Guizhou, and Sichuan.</p><p>The advantage of speaking Mandarin is that everyone understands it and most people speak it, so <strong>HTTP has a certain universality</strong>. 
The advantage of speaking a dialect is that it can be more concise, more private, and more customizable; the disadvantage is that the other party (especially the client side) must also understand the dialect, and once everyone speaks one dialect, switching to another is hard.</p><p>So <strong>RPC is generally used for internal service calls</strong>, such as between service A and service B inside Alibaba’s Taobao system.</p><p>The microservice philosophy emphasizes independence, autonomy, and flexibility, while RPC imposes more constraints. Therefore, <strong>microservice frameworks generally use HTTP RESTful calls, except in systems with stringent efficiency requirements</strong>.</p><h1 id="5-Service-Splitting"><a href="#5-Service-Splitting" class="headerlink" title="5. Service Splitting"></a>5. Service Splitting</h1><p>Through the descriptions above, covering the introduction to microservices, service governance, service discovery, and inter-service communication, we have become familiar with the basic features of microservices.</p><p><strong>Then, as an architect or a mid-to-senior programmer, how do we draw the boundaries between modules and split out microservices?</strong></p><p>Next, we introduce three common ways to split microservices.</p><h2 id="5-1-Business-Domain"><a href="#5-1-Business-Domain" class="headerlink" title="5.1 Business Domain"></a>5.1 Business Domain</h2><p>Split microservices by business domain (also called vertical splitting), e.g. into user, mall, and order modules; if the same functionality is needed in several places, sink it into a separate microservice and call it uniformly.</p><p>The benefit of splitting by business is <strong>high cohesion, with little risk of coupling between businesses</strong>.</p><h2 id="5-2-Functional-Positioning"><a href="#5-2-Functional-Positioning" class="headerlink" title="5.2 Functional Positioning"></a>5.2 Functional Positioning</h2><p>Split microservices by functional positioning (also called horizontal splitting), e.g. login and registration, user shopping, and points redemption. If the same module is used by more than one function, you can split it out further and call it uniformly.</p><p>The benefits of splitting by functional positioning are <strong>high development efficiency and more independent testing and use of each function</strong>.</p><p>When users exercise one type of system function they tend to call APIs of the same microservice, which helps avoid knock-on problems and distributed-transaction problems.</p><h2 id="5-3-Level-of-Importance"><a href="#5-3-Level-of-Importance" class="headerlink" title="5.3 Level of Importance"></a>5.3 Level of Importance</h2><p>Split microservices by importance, distinguishing core from non-core modules: in an e-commerce system, for example, the order module is core and the logistics module is not. The key criteria for whether a module is core are:</p><ol><li><strong>Is it indispensable</strong>. 
In an e-commerce system, for example, what users care about most is shopping online, and orders record the user’s purchases, the core business, so the order module is naturally very important.</li><li><strong>High user attention</strong>, i.e. high traffic, which brings exposure for the system. Such a module (e.g. the product display module in e-commerce) strongly shapes the user’s judgment of overall product quality, so it is also extremely important.</li></ol><p>Beyond that, the industry has a fairly mature design methodology for the topic of microservice splitting: the famous <strong>DDD (Domain-driven design)</strong>.</p><blockquote><p>Its principle is to build a domain model through event storming, reasonably divide the <strong>logical and physical boundaries of the domain</strong>, establish the domain objects, the service matrix, and the service architecture diagram, define a code structure model that conforms to the DDD layered architecture, and keep the business model and the code model consistent.</p></blockquote><p>Given my own level of understanding and the limited length of this article, I will not expand on it here; if you would like to read more, follow along and a dedicated piece can be arranged later.</p>]]></content>
    
    
    <summary type="html">Currently in the Internet world, whether you build 2B [enterprise-facing] or 2C [consumer-facing] products, microservices development is everywhere. But do you really understand microservices?</summary>
    
    
    
    <category term="Technology" scheme="https://www.nablepart.com/categories/Technology/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="framework" scheme="https://www.nablepart.com/tags/framework/"/>
    
    <category term="network" scheme="https://www.nablepart.com/tags/network/"/>
    
    <category term="microservices" scheme="https://www.nablepart.com/tags/microservices/"/>
    
    <category term="Currently" scheme="https://www.nablepart.com/tags/Currently/"/>
    
    <category term="Internet" scheme="https://www.nablepart.com/tags/Internet/"/>
    
    <category term="everywhere" scheme="https://www.nablepart.com/tags/everywhere/"/>
    
  </entry>
  
  <entry>
    <title>I heard you know architecture design? Come on, explain why it&#39;s not Li Jiaqi&#39;s fault.</title>
    <link href="https://www.nablepart.com/37d3d5b120da/"/>
    <id>https://www.nablepart.com/37d3d5b120da/</id>
    <published>2023-11-06T11:04:00.000Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
    <content type="html"><![CDATA[<h1 id="1-Introduction"><a href="#1-Introduction" class="headerlink" title="1. Introduction"></a>1. Introduction</h1><p>Today, I’m going to talk to you about refactoring, that mysterious skill of programmers! Don’t worry: I’ll help you understand and master it with easy-to-understand language and some fun conversations that my 8-year-old niece says she understands.</p><h2 id="1-1-Background"><a href="#1-1-Background" class="headerlink" title="1.1 Background"></a>1.1 Background</h2><p>Code Development:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/2a84de458d4d4fcb80fc883fa7ed21a7%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>One month later:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/9aea7c9997a048768166d4805cdf47e7%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>I’ll change it later, <strong>when there’s time</strong> (don’t worry, there won’t be time, and even when there is, I won’t change it).</p><p>Six months later:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/61eb652ce31c47c3ae56d8c0fa1c7129%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>As shown above, this is a scenario every developer goes through: the early code simply cannot stand review, or you fall into deep doubt: did such bad code really come from my own hands?</p><p>What’s more, most systems today are developed collaboratively, and every programmer’s naming conventions and coding habits differ, leaving one system’s code with many different flavors.</p><h4 id="What-is-refactoring"><a href="#What-is-refactoring" class="headerlink" title="What is refactoring"></a>What is refactoring</h4><p>Yeon: Hey uncle, I heard you’re going to share refactoring, what’s new about that?</p><p><img 
src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/264a5dd1dd204ecb9cde8d105f2eddd1%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>❤：Hi Yeon! Refactoring means improving the design of existing code, making it easier to understand and maintain, without changing its behavior. Think of it as giving your code a makeover: it looks better afterwards, yet inside it is still the same code, not changed beyond recognition.</p><h4 id="Why-Refactor"><a href="#Why-Refactor" class="headerlink" title="Why Refactor"></a>Why Refactor</h4><p>Lulu: Wow, that sounds awesome, why do we need to refactor?</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/e06b300927364a3eb074e56e16bddda3%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>❤：Haha, good question, Lulu! Code is alive and grows bigger every day; once it becomes hard to understand and modify, it is like a heavy elephant slowing our pace. Refactoring takes weight off the elephant, making it lighter and more agile, and development speed can pick up quite a bit!</p><p>Just as you two are little neat freaks who love tidying your room, programmers who are neat freaks about their code refactor it all the time!</p><h4 id="When-to-refactor"><a href="#When-to-refactor" class="headerlink" title="When to refactor"></a>When to refactor</h4><p>Yeon: Sounds reasonable, but when should you refactor?</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/36907733ca3a42d692a515132d16313f%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>❤：Good question, Yeon! 
There are several typical scenarios:</p><ul><li>When several places in the code look exactly the same, it is a good time to merge them into one and reduce redundancy.</li><li>When a function or method looks thicker than a dictionary, break it into smaller parts that are easier to understand.</li><li>When you want to fix a bug but the original code structure is so complex that the fix becomes as hard as solving a puzzle, refactoring before fixing is a good idea.</li><li>When you want to add new functionality but the code will not let you extend it easily, it also pays to refactor first and extend afterwards.</li></ul><h4 id="Steps-to-refactoring"><a href="#Steps-to-refactoring" class="headerlink" title="Steps to refactoring"></a>Steps to refactoring</h4><p>Lulu: I see, uncle, so what are the exact steps of refactoring?</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/3041751045e04a7d94b45053272acd98%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>❤：Good question, Lulu, it looks like you’re thinking about it seriously! Let me walk you through the basic steps next!</p><h1 id="2-How-to-refactor"><a href="#2-How-to-refactor" class="headerlink" title="2. How to refactor"></a>2. How to refactor</h1><p>Before refactoring, we first need to identify the bad-flavored code in the codebase.</p><p>By bad flavor (what the industry also calls a “code smell”), I mean code that is messy on the surface and rotten underneath. 
Simply put, it’s code that just doesn’t feel right.</p><h2 id="2-1-Bad-flavored-code"><a href="#2-1-Bad-flavored-code" class="headerlink" title="2.1 Bad-flavored code"></a>2.1 Bad-flavored code</h2><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/11a13edcea64423e9e1f9d409a0d8a6f%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>The book Refactoring: Improving the Design of Existing Code describes some two dozen such bad flavors; below we pick the most common ones.</p><h3 id="1-Methods-that-are-too-long"><a href="#1-Methods-that-are-too-long" class="headerlink" title="1) Methods that are too long"></a>1) Methods that are too long</h3><p>A method that is too long does too much work in a single place, and it often mixes statements at different abstraction levels, e.g. dto-level and service-level code in one body, i.e. fragmented logic.</p><p>Beyond that, overly long methods tend to cause a number of additional problems.</p><h4 id="Problem-1-Excessive-annotations"><a href="#Problem-1-Excessive-annotations" class="headerlink" title="Problem 1: Excessive annotations"></a>Problem 1: Excessive annotations</h4><p>Methods that are too long are difficult to understand and therefore require a lot of comments. If 10 lines of code need 20 lines of comments, the code is hard to read. 
Especially when reading such code, you often need to keep a lot of context in your head.</p><h4 id="Problem-2-Procedure-Oriented"><a href="#Problem-2-Procedure-Oriented" class="headerlink" title="Problem 2: Procedure-Oriented"></a>Problem 2: Procedure-Oriented</h4><p>The problem with procedure-oriented code is that once the logic gets complex, the code becomes difficult to maintain.</p><p>By contrast, code development usually favors object-oriented design thinking, i.e. abstracting things with common characteristics into objects.</p><h4 id="Solution-Ideas"><a href="#Solution-Ideas" class="headerlink" title="Solution Ideas"></a>Solution Ideas</h4><p>To solve the long-method problem, we follow one principle: whenever we feel the need to write a comment to explain a piece of code, we extract that piece into a separate method and name it after the intent of the code.</p><blockquote><p>Method naming principle: summarize what it does, not how it does it.</p></blockquote><h3 id="2-Overly-large-classes"><a href="#2-Overly-large-classes" class="headerlink" title="2) Overly large classes"></a>2) Overly large classes</h3><p>An overly large class does too many things; for example, its implementation contains both product logic and order logic. Such a class accumulates too many instance variables and methods, which are hard to manage from the moment it is created.</p><p>Beyond that, an overly large class tends to cause two problems.</p><h4 id="Problem-1-Redundancy-and-duplication"><a href="#Problem-1-Redundancy-and-duplication" class="headerlink" title="Problem 1: Redundancy and duplication"></a>Problem 1: Redundancy and duplication</h4><p>When one class contains the logic of two modules, the two modules easily grow dependencies on each other. 
This easily leads to a “you contain me, I contain you” situation while the code is being written.</p><p>That is, each module ends up holding program structures or methods whose intent really belongs to the other module.</p><h4 id="Problem-2-Poor-coupling-structure"><a href="#Problem-2-Poor-coupling-structure" class="headerlink" title="Problem 2: Poor coupling structure"></a>Problem 2: Poor coupling structure</h4><p>When a class’s name is no longer sufficient to describe what it does, the odds are that its coupling is poorly structured, which runs against the goal of writing code with “high cohesion and low coupling”.</p><h4 id="Solution"><a href="#Solution" class="headerlink" title="Solution"></a>Solution</h4><p>Split large classes into small ones along business logic, and if two classes depend on each other, associate them through references, foreign keys, and the like. When duplicate code appears, merge it wherever possible, and the program becomes more concise and maintainable.</p><h3 id="3-Logical-decentralization"><a href="#3-Logical-decentralization" class="headerlink" title="3) Logical decentralization"></a>3) Logical decentralization</h3><p>Logical decentralization, i.e. scattered logic, stems from unreasonable dependencies at the architecture or object level, and it usually shows up as two problems:</p><h4 id="Fragmentation"><a href="#Fragmentation" class="headerlink" title="Fragmentation"></a>Fragmentation</h4><p>One class keeps being modified in different directions for different reasons.</p><h4 id="Scattered-changes"><a href="#Scattered-changes" class="headerlink" title="Scattered changes"></a>Scattered changes</h4><p>One change forces modifications across multiple classes.</p><h3 id="4-Other-bad-flavors"><a href="#4-Other-bad-flavors" class="headerlink" title="4) Other bad flavors"></a>4) Other bad flavors</h3><h4 id="Data-Mud-Clusters"><a href="#Data-Mud-Clusters" class="headerlink" title="Data Mud Clusters"></a>Data 
Mud Clusters</h4><p>A data mud cluster is a confusing tangle of many data items that is hard to reuse and extend.</p><p>When many data items always appear together, that is a sign they belong to one category; we can then consider encapsulating them, by business meaning, into a data object. A counter-example is below:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">AddUser</span><span class="params">(age <span class="type">int</span>, gender, firstName, lastName <span class="type">string</span>)</span></span> &#123;&#125;</span><br></pre></td></tr></table></figure><p>After the refactoring:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">type</span> AddUserRequest <span class="keyword">struct</span> &#123;</span><br><span class="line">   Age <span class="type">int</span></span><br><span class="line">   Gender <span class="type">string</span></span><br><span class="line">   FirstName <span class="type">string</span></span><br><span class="line">   LastName <span class="type">string</span></span><br><span class="line">&#125;</span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">AddUser</span><span class="params">(req AddUserRequest)</span></span> &#123;&#125;</span><br></pre></td></tr></table></figure><h4 id="Basic-Type-Paranoia"><a href="#Basic-Type-Paranoia" class="headerlink" title="Basic Type Paranoia"></a>Basic Type Paranoia</h4><p>In most high-level programming languages there are basic types and structural types. 
In Go, basic types are int, string, bool, and so on.</p><p>Basic type paranoia refers to the fact that when defining variables for an object, we often use the basic type without considering the actual business meaning of the variable.</p><p>An example is the following:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">type</span> QueryMessage <span class="keyword">struct</span> &#123;</span><br><span class="line">Role        <span class="type">int</span>         <span class="string">`json:&quot;role&quot;`</span></span><br><span class="line">Content  <span class="type">string</span>    <span class="string">`json:&quot;content&quot;`</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>After the refactoring:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line"><span class="keyword">type</span> MessageRole <span class="type">int</span></span><br><span class="line"></span><br><span class="line"><span class="keyword">const</span> (</span><br><span class="line">HUMAN     MessageRole = <span class="number">0</span></span><br><span class="line">ASSISTANT MessageRole = <span class="number">1</span></span><br><span class="line">)</span><br><span class="line"></span><br><span class="line"><span class="keyword">type</span> QueryMessage <span 
class="keyword">struct</span> &#123;</span><br><span class="line">Role        MessageRole   <span class="string">`json:&quot;role&quot;`</span></span><br><span class="line">Content  <span class="type">string</span>               <span class="string">`json:&quot;content&quot;`</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>These are the request fields for a ChatGPT-style question. The conversation role is an int, where 0 means the human and 1 means the chat assistant.</p><p>When we use a bare int to represent the conversation role, the definition itself tells us nothing more.</p><p>But by defining it as <code>type MessageRole int</code>, we can see clearly from the constant values that there are two conversation roles: HUMAN &amp; ASSISTANT.</p><h4 id="Confusing-Hierarchical-Calls"><a href="#Confusing-Hierarchical-Calls" class="headerlink" title="Confusing Hierarchical Calls"></a>Confusing Hierarchical Calls</h4><p>A typical system is layered into the business service, the transit controller, and the database-access dao. Generally the controller calls the service, and the service calls the dao.</p><p>If the controller calls the dao directly, or the dao calls the controller, the hierarchy becomes confused, and that is worth optimizing.</p><h3 id="5-Problems-caused-by-bad-flavors"><a href="#5-Problems-caused-by-bad-flavors" class="headerlink" title="5) Problems caused by bad flavors"></a>5) Problems caused by bad flavors</h3><p>YanYan: Uncle, do all these bad flavors need to be addressed? What kind of impact does bad-flavored code actually have?</p><p>❤: Yes; code with too many bad flavors brings <strong>four kinds of “difficult”</strong>.</p><ul><li><p><strong>Difficult to understand</strong>: newcomers cannot make sense of the code they are reading; after two weeks on a module they still do not know what it means. Maybe it is not that the developer’s level is lacking; maybe the code is simply written too badly to explain.</p></li><li><p><strong>Difficult to reuse</strong>: either you cannot read it at all, or you barely manage to and still dare not use it for fear of hidden pits; or the system is so coupled that the reusable parts are hard to separate out.</p></li><li><p><strong>Difficult to change</strong>: one change ripples through the whole body, i.e. scattered modification. Touch one piece of code and the whole module nearly falls apart.</p></li><li><p><strong>Difficult to test</strong>: changes are hard to test, and functional verification becomes difficult. With messy naming and a confusing structure, testing may surface brand-new problems.</p></li></ul><h1 id="3-Refactoring-Tips"><a href="#3-Refactoring-Tips" class="headerlink" title="3. Refactoring Tips"></a>3. Refactoring Tips</h1><p>Lulu: Oh, so that’s it, can we remove them then?</p><p>❤: Of course you can! Just as you love cleaning your room, every responsible programmer (with a clean-code streak) considers refactoring.</p><p>And the industry already has a well-established way of thinking about this problem: remove the “bad taste” from the code through continuous refactoring.</p><h3 id="1-Naming-conventions"><a href="#1-Naming-conventions" class="headerlink" title="1) Naming conventions"></a>1) Naming conventions</h3><p>A good name should:</p><ul><li>Accurately describe what is done</li><li>Follow commonly agreed formatting conventions</li></ul><h4 id="common-conventions"><a href="#common-conventions" class="headerlink" title="common conventions"></a>common conventions</h4><p>Take Huawei’s internal Go language development specification as an example:</p><table><thead><tr><th>Scenario</th><th>Constraint</th><th>Example</th></tr></thead><tbody><tr><td>Project name</td><td>all lowercase, multiple words separated by a hyphen ‘-’</td><td>user-order</td></tr><tr><td>Package name</td><td>all lowercase, multiple words separated by a hyphen ‘-’</td><td>config-sit</td></tr><tr><td>Structure name</td><td>initial uppercase, camel case</td><td>Student</td></tr><tr><td>Interface</td><td>RESTful API naming, with the last portion of the path a resource noun</td><td>[get] api&#x2F;v1&#x2F;student</td></tr><tr><td>Constant name</td><td>initial uppercase, camel case</td><td>CacheExpiredTime</td></tr><tr><td>Variable name</td><td>initial lowercase, camel case</td><td>userName, password</td></tr></tbody></table><h3 id="2-Refactoring-techniques"><a href="#2-Refactoring-techniques" class="headerlink" title="2) Refactoring techniques"></a>2) Refactoring techniques</h3><p>YanYan: Wow, so many mature conventions we can use! Besides the conventions, is there anything else to watch for?</p><p>❤: Good question, Yeon! Next I will introduce some common refactoring techniques:</p><ul><li><strong>Extract function</strong>: break a long function into smaller chunks that are easier to understand and reuse.</li><li><strong>Rename</strong>: rename variables, functions, classes, etc. so the names carry meaning.</li><li><strong>Eliminate redundancy</strong>: find similar chunks of code and merge them to reduce duplication.</li><li><strong>Move</strong>: move functions or fields to more appropriate places to keep the code organized.</li><li><strong>Abstract generic classes</strong>: pull generic functionality out into a class to increase reusability.</li><li><strong>Introduce parameter objects</strong>: when there are too many parameters, pass an object instead to eliminate data clumps.</li><li><strong>Use guard statements</strong>: reduce the use of else and make the code structure clearer.</li></ul><h1 id="4-Summary"><a href="#4-Summary" class="headerlink" title="4. Summary"></a>4. Summary</h1><p>Lulu: Uncle, you’re so funny, I feel like I can refactor too!</p><p>❤: Lulu is awesome, I believe in you! The idea of refactoring is everywhere: just as a good life needs some blank space left in it, yours will be wonderful too. 
In programming, refactoring makes code cleaner, easier to read and understand, and improves development efficiency; it is a skill every programmer should master.</p><p>YanYan: I can do it, too! In the future, I’ll write code and refactor it, and I’ll like my uncle’s articles as well.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/eb2bf78dae3443ffabec61ef6881e50d%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>❤: Hahaha, well said, you two are great! Just as you like to clean, paint, and read poetry, if you write code in the future, it will be clean and poetic too!</p>]]></content>
    
    
    <summary type="html">A method that is too long does too much work inside a single method, often mixing statements at different abstraction levels, such as DTO-level and service-level code, i.e., scattered logic.</summary>
    
    
    
    <category term="Backend" scheme="https://www.nablepart.com/categories/Backend/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="framework" scheme="https://www.nablepart.com/tags/framework/"/>
    
    <category term="Backend Technology Sharing" scheme="https://www.nablepart.com/tags/Backend-Technology-Sharing/"/>
    
    
    <category term="code specification" scheme="https://www.nablepart.com/tags/code-specification/"/>
    
    
    
    
  </entry>
  
  <entry>
    <title>I heard you know architecture design? Come on, make a WeChat group chat system</title>
    <link href="https://www.nablepart.com/17c9509aef09/"/>
    <id>https://www.nablepart.com/17c9509aef09/</id>
    <published>2023-11-06T11:04:00.000Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
    <content type="html"><![CDATA[<h2 id="1-Introduction"><a href="#1-Introduction" class="headerlink" title="1. Introduction"></a>1. Introduction</h2><p>The other day, as I was on my phone chatting away in my friends’ WeChat group about gossip and weekend plans, a festive message suddenly popped up with eight big characters right in the middle: <strong>Congratulations on your fortune, great luck</strong>.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/2023-11-07_222844.png"></p><p>Grab the red envelopes! Most of you are surely no strangers to this. So how is WeChat’s group chat system designed so that we can chat, share pictures and emoticons, and enjoy that magical red packet feature so easily?</p><p>This question has bothered me for a long time, so I decided to dig deeper and see how the design behind WeChat’s group chat system works.</p><h3 id="WeChat-Group-Chat-System-Design"><a href="#WeChat-Group-Chat-System-Design" class="headerlink" title="WeChat Group Chat System Design"></a>WeChat Group Chat System Design</h3><p>WeChat, a universal app with a billion users, has surely been used by all of you. Group creation is a core capability of WeChat, putting hundreds of friends or strangers into a shared group space.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/2023-11-07_222954.png"></p><p>Maybe you’ve taken part in WeChat group chats many times, but have you ever wondered how the system behind them is designed?</p><p>Let’s explore it today.</p><h2 id="2-System-Requirements"><a href="#2-System-Requirements" class="headerlink" title="2. System Requirements"></a>2. 
System Requirements</h2><h3 id="2-1-System-Features-and-Functional-Requirements"><a href="#2-1-System-Features-and-Functional-Requirements" class="headerlink" title="2.1 System Features and Functional Requirements"></a>2.1 System Features and Functional Requirements</h3><p>WeChat group chat is one of the core features of the social application, allowing users to create their own social circles to communicate with family, friends, or enthusiasts with common interests.</p><p>The following are the core features of the WeChat group chat system:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/2e6101baee724a39986dfd722408992c%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><ul><li><strong>Create Group Chat</strong>: Users can create new chat groups, inviting friends to join or building groups with strangers face to face.</li><li><strong>Group Management</strong>: Group owners and administrators can manage group members and set rules and permissions.</li><li><strong>Message Sending and Receiving</strong>: Allows group members to send multiple types of messages such as text, image, audio, video, etc. 
and push them to all group members.</li><li><strong>Real-time communication</strong>: messages should be able to be delivered quickly to ensure real-time interaction.</li><li><strong>Red Packet Grabbing</strong>: Users can send any number and amount of red packets in the group chat, and group members can grab the red packets with random amount.</li></ul><h3 id="2-2-Non-functional-Requirements-Coping-with-High-Concurrency-High-Performance-and-Mass-Storage"><a href="#2-2-Non-functional-Requirements-Coping-with-High-Concurrency-High-Performance-and-Mass-Storage" class="headerlink" title="2.2 Non-functional Requirements: Coping with High Concurrency, High Performance, and Mass Storage"></a>2.2 Non-functional Requirements: Coping with High Concurrency, High Performance, and Mass Storage</h3><p>When we face the scenario that 1 billion WeChat users may use the group building function every day, we need to deal with large-scale user concurrency. This leads to the non-functional requirements of the system, including:</p><ul><li><strong>High Concurrency</strong>: the system needs to support a large number of users creating and using groups simultaneously to ensure a latency-free user experience.</li><li><strong>High performance</strong>: fast messaging and instant response are key to digital socialization.</li><li><strong>Massive Storage</strong>: The system must be scalable to accommodate massive amounts of user-generated message text, images, and audio&#x2F;video data.</li></ul><h2 id="3-Outline-Design"><a href="#3-Outline-Design" class="headerlink" title="3. Outline Design"></a>3. 
Outline Design</h2><p>In the outline design, we consider the core components and basic business flows of the system.</p><h3 id="3-1-Core-Components"><a href="#3-1-Core-Components" class="headerlink" title="3.1 Core Components"></a>3.1 Core Components</h3><p>The following core components and protocols are involved in the WeChat group chat system.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/5d2d0557d4ee4c8ca4dacbe60c74e2f2%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><hr><ul><li><p><strong>Client</strong>: receives WeChat group chat messages on the phone or PC and transmits them to the backend server in real time.</p></li><li><p><strong>Websocket transfer protocol</strong>: supports real-time interaction between the client and the backend server with low overhead and high real-time performance; commonly used in IM systems such as WeChat and QQ.</p></li><li><p><strong>Long-connection cluster</strong>: a cluster of systems that maintain long Websocket connections with clients and forward messages to application servers through middleware.</p></li><li><p><strong>Message Processing Server Cluster</strong>: provides real-time message processing capability, including data storage, queries, and interaction with the database.</p></li><li><p><strong>Message Push Server Cluster</strong>: the relay station for messages, responsible for delivering messages to the correct group members.</p></li><li><p><strong>Database server cluster</strong>: stores user text data, image thumbnails, audio and video metadata, etc.</p></li><li><p><strong>Distributed file storage cluster</strong>: stores users’ pictures, audio, and video files.</p></li></ul><h3 id="3-2-Business-Outline-Design"><a href="#3-2-Business-Outline-Design" class="headerlink" title="3.2 Business Outline Design"></a>3.2 Business Outline Design</h3><h4 id="Group-Chat-Creation"><a href="#Group-Chat-Creation"
class="headerlink" title="Group Chat Creation"></a>Group Chat Creation</h4><ul><li><strong>Unique ID Assignment</strong>: When a user requests to create a new group, the system generates a unique group ID, which can be produced by a distributed ID generator such as Snowflake or by a database auto-increment ID. Here, we adopt MySQL’s auto-increment ID for the sake of simplicity.</li><li><strong>Group Information Storage</strong>: Stores the group ID and related information (e.g. group name, creator ID, etc.) in the group database.</li><li><strong>Member association</strong>: adds the group owner as the founding member of the group; the creator also becomes the administrator.</li><li><strong>Message History</strong>: To ensure that new members can access previous messages, the group ID of this new group is stored in association with user messages.</li></ul><p>In addition to pulling friends into a group, WeChat has also implemented the ability to build groups face to face.</p><h2 id="4-Face-to-face-group-building"><a href="#4-Face-to-face-group-building" class="headerlink" title="4. Face-to-face group building"></a>4. Face-to-face group building</h2><p>The user initiates a face-to-face group creation and inputs a 4-digit random code; nearby users can join the group chat after entering the same code. The face-to-face group creation feature usually involves the following data table design and core business interaction flow.</p><h3 id="4-1-Database-table-design"><a href="#4-1-Database-table-design" class="headerlink" title="4.1 Database table design"></a>4.1 Database table design</h3><ol><li><strong>User table</strong>: stores user information, including user ID, nickname, avatar, etc.</li><li><strong>Group table</strong>: stores group information, including group ID, group name, creator ID, number of group members, etc.</li><li><strong>GroupMember table</strong>: associates users with groups, including user ID and group ID.</li><li><strong>RandomCode table</strong>: stores the random code for face-to-face group creation and the associated group ID.</li></ol><h3 id="4-2-Core-Business-Interaction-Flow"><a href="#4-2-Core-Business-Interaction-Flow" class="headerlink" title="4.2 Core Business Interaction Flow"></a>4.2 Core Business Interaction Flow</h3><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/f6498bf859a14790aa6f618f7c46dea2%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>User A initiates a face-to-face group in the mobile application, enters a random code, passes the verification, and waits for users in the surrounding area (within 50 meters) to join. 
At this time, the system stores the user information in the cache as a <code>HashMap</code> and sets the expiration time to <code>3min</code>.</p><figure class="highlight css"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">&#123;random <span class="selector-tag">code</span>, user list <span class="selector-attr">[User A (ID, name, avatar)]</span>&#125;</span><br></pre></td></tr></table></figure><p>User B initiates a face-to-face group creation on another phone and inputs the specified random code; <strong>if such a random code exists nearby, the user enters the same group chat waiting page and can see the avatars and nicknames of the other members</strong>.</p><p>At this point, in addition to obtaining all user information based on the random code, the system also updates the user information in the cache in real time.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/bd1da07dc8884f3fb9466255b13b48b4%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>When the first user clicks <strong>Enter the group</strong> to join the group chat, the system stores the generated random code in the <code>RandomCode</code> table, associates it with the newly created group ID, and updates the number of group members.</p><p>Then, the system stores the user information and the newly generated group chat information in the <code>Group</code> and <code>GroupMember</code> tables.</p><h4 id="member-joins-and-refreshes-the-group-member-information"><a href="#member-joins-and-refreshes-the-group-member-information" class="headerlink" title="member joins and refreshes the group member information"></a>member joins and refreshes the group member information</h4><p>Later, when users B and C join the group chat with the random code, the mobile client sends a request to the server backend to verify whether the random code is valid. 
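The waiting-list cache described above (one entry per random code, expiring after 3 minutes) can be sketched with an in-memory map. A real deployment would use Redis; all names here are illustrative assumptions:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// WaitingUser is a user shown on the group-chat waiting page.
type WaitingUser struct {
	ID   int64
	Name string
}

// codeEntry holds the users waiting behind one random code.
type codeEntry struct {
	users     []WaitingUser
	expiresAt time.Time
}

// CodeCache stands in for the Redis cache keyed by random code.
type CodeCache struct {
	entries map[string]*codeEntry
}

func NewCodeCache() *CodeCache {
	return &CodeCache{entries: map[string]*codeEntry{}}
}

// Create registers a new random code started by the first user,
// with the 3-minute expiry mentioned in the article.
func (c *CodeCache) Create(code string, u WaitingUser) {
	c.entries[code] = &codeEntry{
		users:     []WaitingUser{u},
		expiresAt: time.Now().Add(3 * time.Minute),
	}
}

// Join validates the code (exists, not expired, group not full) and
// adds the user, returning everyone currently waiting.
func (c *CodeCache) Join(code string, u WaitingUser) ([]WaitingUser, error) {
	e, ok := c.entries[code]
	if !ok || time.Now().After(e.expiresAt) {
		return nil, errors.New("invalid or expired code")
	}
	if len(e.users) >= 500 { // current cap on an ordinary group
		return nil, errors.New("group is full")
	}
	e.users = append(e.users, u)
	return e.users, nil
}

func main() {
	cache := NewCodeCache()
	cache.Create("1024", WaitingUser{ID: 1, Name: "A"})
	users, err := cache.Join("1024", WaitingUser{ID: 2, Name: "B"})
	fmt.Println(len(users), err) // 2 <nil>
	_, err = cache.Join("9999", WaitingUser{ID: 3, Name: "C"})
	fmt.Println(err != nil) // true
}
```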
The server backend verifies the random code, checking whether it exists in the cache and whether it is still within the validity period.</p><p>Then, it determines whether the group is already full (at present, a group created by an ordinary user can hold at most 500 members). If validation passes, the server backend adds users B and C to the group member table <code>GroupMember</code> and returns a success response.</p><p>When the mobile client application receives the success response, it updates the group chat lists of users B and C to show the new group they have joined.</p><h4 id="Other-Technical-Components"><a href="#Other-Technical-Components" class="headerlink" title="Other Technical Components"></a>Other Technical Components</h4><p>In this way, user A successfully creates a face-to-face group by generating a random code that surrounding users enter or scan as a QR code. This feature involves several technical components, including distributed caching, the database, and QR code generation and validation.</p><p>Meanwhile, an important capability in the face-to-face group building process is identifying the user’s area, e.g., within 50 meters. This can be done with <strong>Redis’ GeoHash-based commands, which can retrieve all users within a range</strong>.</p><p>Due to space limitations, the details are not expanded here; to learn more about QR code generation and the location algorithm, see my previous article: [I heard you know architecture design? Come on, make a bus &amp; subway ride system]. </p><h2 id="5-Message-sending-and-receiving"><a href="#5-Message-sending-and-receiving" class="headerlink" title="5. Message sending and receiving"></a>5. Message sending and receiving</h2><p>When a member speaks in a WeChat group, the system needs to <strong>handle the distribution of the message, notify other members, and ensure that the message is displayed</strong>. 
The following are the detailed interaction steps for this function, as well as the database storage scheme.</p><h3 id="5-1-Interaction-Flow"><a href="#5-1-Interaction-Flow" class="headerlink" title="5.1 Interaction Flow"></a>5.1 Interaction Flow</h3><p>The message sending and receiving timing diagram is shown below:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/914596ee246041e8b062f5d344f0ed08%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><ol><li>User A sends a message with an image, video or audio to the group.</li><li>The mobile client application uploads the message content and media files to the server backend.</li><li>The server backend receives the message and the media file, stores the message content in the Message table, and stores the media file in the distributed file storage cluster. <strong>In the Message table, not only is the MediaID of the media file recorded to associate the message with the media, but also the thumbnail image, video cover image, and so on.</strong></li><li>The server backend broadcasts the message to all group members. The mobile client application receives the message and loads the corresponding presentation based on the message type (text, image, video, audio).</li><li>When the user clicks to view the image, video or audio thumbnail, the client application fetches the corresponding media file path from the object storage cluster based on the <code>MediaID</code> and displays it to the user.</li></ol><p>This process ensures that messages and media files are stored and displayed efficiently. 
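The Message-to-Media association behind these steps can be sketched in Go. Field names and the lookup helper are illustrative assumptions, not the real schema:

```go
package main

import (
	"fmt"
	"time"
)

// MessageType distinguishes the four message kinds from the article.
type MessageType int

const (
	Text MessageType = iota
	Image
	Video
	Audio
)

// Media mirrors the media file stored in the distributed file store.
type Media struct {
	MediaID  int64
	FilePath string // path inside the file storage cluster
}

// Message mirrors a Message-table row: it keeps only a MediaID plus
// a lightweight thumbnail/cover, never the full file.
type Message struct {
	MessageID int64
	Type      MessageType
	Content   string // text, or thumbnail/cover for media messages
	MediaID   int64  // 0 for plain text messages
	SenderID  int64
	GroupID   int64
	SentAt    time.Time
}

// resolveMedia stands in for step 5: fetching the full file path
// from object storage by MediaID when the user taps a thumbnail.
func resolveMedia(m Message, store map[int64]Media) (string, bool) {
	if m.MediaID == 0 {
		return "", false
	}
	media, ok := store[m.MediaID]
	return media.FilePath, ok
}

func main() {
	store := map[int64]Media{7: {MediaID: 7, FilePath: "/files/cat.jpg"}}
	msg := Message{MessageID: 1, Type: Image, Content: "thumb.jpg", MediaID: 7, GroupID: 99}
	path, ok := resolveMedia(msg, store)
	fmt.Println(path, ok) // /files/cat.jpg true
}
```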
Users can upload and view various types of media data, while the server backend achieves effective message storage and presentation by associating <code>Message</code> records with files in the object storage server.</p><h3 id="5-2-Message-Storage-and-Presentation"><a href="#5-2-Message-Storage-and-Presentation" class="headerlink" title="5.2 Message Storage and Presentation"></a>5.2 Message Storage and Presentation</h3><p>Saving and displaying users’ image, video or audio data in WeChat groups requires careful design of data storage and display. In addition to the user table and group table mentioned in the face-to-face group building feature above, the following table structures are needed:</p><ol><li><strong>Message table:</strong> stores messages; each message has a unique MessageID, message type (text, picture, video, audio), message content (text, picture thumbnail, video cover image, etc.), sender UserID, receiver group GroupID, send time, and other fields.</li><li><strong>Media table:</strong> stores media data such as pictures, videos, and audio uploaded by users. Each media file has a unique MediaID, file path, uploader UserID, upload time, and other fields.</li><li><strong>MessageState table:</strong> stores each user’s message state, including MessageID, UserID, and whether it has been read. When a message is pushed, the unread count is calculated through this table and pushed to the user, and a small badge with the unread count is displayed on the offline user’s phone.</li></ol><p>As we know, a <code>select count</code> style query in MySQL triggers a full table scan, so loading the unread message count this way is slow.</p><p>For query performance, we can store the user’s unread message count in Redis and update it in real time. 
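A minimal sketch of such a counter, kept in a fast store (Redis in the article; a plain map here) and capped so the client can simply show a "99+" badge. The type and method names are assumptions:

```go
package main

import "fmt"

// UnreadCounter keeps a per-user unread count in memory, standing in
// for the Redis counter described in the article.
type UnreadCounter struct {
	counts map[int64]int // userID -> unread count
}

func NewUnreadCounter() *UnreadCounter {
	return &UnreadCounter{counts: map[int64]int{}}
}

// Incr bumps a user's unread count, never growing past 100, so the
// stored value stays bounded no matter how many messages arrive.
func (u *UnreadCounter) Incr(userID int64) {
	if u.counts[userID] < 100 {
		u.counts[userID]++
	}
}

// Badge renders the number shown on the app icon: the exact count,
// or "99+" once the cap is reached.
func (u *UnreadCounter) Badge(userID int64) string {
	n := u.counts[userID]
	if n >= 100 {
		return "99+"
	}
	return fmt.Sprintf("%d", n)
}

func main() {
	c := NewUnreadCounter()
	for i := 0; i < 150; i++ {
		c.Incr(42)
	}
	fmt.Println(c.Badge(42)) // 99+
	c.Incr(7)
	fmt.Println(c.Badge(7)) // 1
}
```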
And, when the unread count is greater than 99, it is capped at 100 and no longer increased.</p><p>When pushing user messages, <strong>whenever the unread count is 100, the number shown is <code>99+</code>, improving storage performance and interaction efficiency.</strong></p><h2 id="6-Grab-Red-Packet"><a href="#6-Grab-Red-Packet" class="headerlink" title="6. Grab Red Packet"></a>6. Grab Red Packet</h2><p>Red packet grabbing allows users to send any number and amount of red packets in a group chat, and group members can grab red packets with random amounts, but it is necessary to <strong>ensure that the amount each user receives is not less than 0.01 yuan</strong>.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/97d9a56d3b2843d597763bf8e88b1431%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>The detailed interaction flow of grabbing red packets is as follows:</p><ol><li>The user receives the notification of a red packet and clicks it to open the group chat page.</li><li>The user taps the red packet, and the backend service verifies the user’s eligibility, ensuring the user has not already claimed it.</li><li>If the eligibility check passes, the backend service allocates the red packet amount and stores the claim record.</li><li>The user sees the amount in the WeChat group, and the red packet status is updated to “Claimed”.</li><li>The payment interface is called asynchronously to credit the amount to the wallet.</li></ol><p>The red packet feature requires attention to database design, real-time grabbing, and the allocation algorithm.</p><h3 id="6-1-Database-Design"><a href="#6-1-Database-Design" class="headerlink" title="6.1 Database Design"></a>6.1 Database Design</h3><p>The fields of the red packet table <code>redpack</code> are as follows:</p><ul><li><strong>id:</strong> primary key, red packet ID</li><li><strong>totalAmount:</strong> total amount</li><li><strong>surplusAmount:</strong> remaining amount</li><li><strong>total:</strong> total number of red packets</li><li><strong>surplusTotal:</strong> remaining number of red packets</li><li><strong>userId:</strong> ID of the user who sent the red packets</li></ul><p>This table records how many red packets a user has sent and maintains the remaining amount.</p><p>The red packet record table <code>redpack_record</code> is as follows:</p><ul><li><strong>id:</strong> primary key, record ID</li><li><strong>redpackId:</strong> red packet ID, foreign key</li><li><strong>userId:</strong> user ID</li><li><strong>amount:</strong> amount grabbed</li></ul><p>The record table stores information about the specific red packets grabbed by users, and is also a child table of the <code>redpack</code> table.</p><h3 id="6-2-Real-time"><a href="#6-2-Real-time" class="headerlink" title="6.2 Real-time"></a>6.2 Real-time</h3><h4 id="1-Send-red-packet"><a href="#1-Send-red-packet" class="headerlink" title="1. Send red packet"></a>1. Send red packet</h4><ol><li>After the user sets the total amount and number of red packets, a row is added to the red packet table and the red packet is sent.</li><li>To ensure real-time performance and grabbing efficiency, a record is also added in Redis.</li><li>A grab-the-red-packet message is pushed to all group members.</li></ol><h4 id="2-Grab-Red-Envelope"><a href="#2-Grab-Red-Envelope" class="headerlink" title="2. Grab Red Envelope"></a>2. Grab Red Envelope</h4><p>Since 2015, WeChat has separated grabbing a red packet from splitting it, so users perform two operations after tapping a red packet. 
That’s why you can sometimes grab a red packet, but when you click on it, you find that <strong>the red packet has already been claimed</strong>.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/0dd3c0ee8510446fa01d38ca3665f49d%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>The interaction steps of grabbing red packets are as follows:</p><ol><li>Grab the red packet: the grabbing operation is done in the <code>Redis</code> cache layer, <strong>updating the number of remaining red packets through an atomic decrement operation</strong>; once it reaches 0, all the red packets have been grabbed.</li><li>Open the red packet: when opening, the amount is first calculated in real time, generally with the <strong>twice-the-mean method</strong> (i.e., between 0.01 and 2 times the remaining average).</li><li>Red packet records: after the user obtains the amount, the received count and amount are totaled through a database transaction, and the red packet table and record table are updated.</li><li>Transfer: to improve efficiency, the final <strong>transfer is an asynchronous operation</strong>, which is why during Chinese New Year the red packet amount cannot be seen in the balance immediately after it is claimed.</li></ol><h3 id="6-3-Red-Packet-Allocation-Algorithm"><a href="#6-3-Red-Packet-Allocation-Algorithm" class="headerlink" title="6.3 Red Packet Allocation Algorithm"></a>6.3 Red Packet Allocation Algorithm</h3><p>Because the red packet amount is allocated randomly, there are two implementation options: real-time splitting and pre-generation.</p><h4 id="1-Real-time-splitting"><a href="#1-Real-time-splitting" class="headerlink" title="1. Real-time splitting"></a>1. 
Real-time splitting</h4><p>Real-time splitting means that <strong>the amount of each red packet is calculated on the fly at the moment it is grabbed</strong>.</p><p>This requires a well-designed splitting algorithm to guarantee that every remaining red packet can still be allocated a non-zero amount.</p><p>With real-time splitting, it is not easy to make the split amounts follow a <strong>normal distribution</strong>.</p><h4 id="2-Pre-generation"><a href="#2-Pre-generation" class="headerlink" title="2. Pre-generation"></a>2. Pre-generation</h4><p>Pre-generation means that <strong>the amounts are fully split before the red packet is sent</strong>; grabbing a red packet simply takes the next pre-split amount in order.</p><p>This approach places lower demands on the splitting algorithm, and the randomness of the split amounts can be very good, but it usually needs to be combined with a queue, and an extra table is needed to store the pre-split amounts.</p><h4 id="3-Twice-the-mean-method"><a href="#3-Twice-the-mean-method" class="headerlink" title="3. Twice the mean method"></a>3. 
Twice the mean method</h4><p>Weighing the advantages and disadvantages above, and since a WeChat group chat is not large (currently up to 500 people), we use real-time splitting with the <strong>twice-the-mean method</strong> to generate random red packets; the amounts only need to be random, not normally distributed.</p><blockquote><p>Therefore, the red packets may differ a lot, but that makes it more exciting, doesn’t it 🐶.</p></blockquote><p>With the twice-the-mean method, each grab yields a random amount between <code>0.01</code> and twice the current average.</p><p>Assuming the remaining amount of the current red packet is 10 yuan and 5 packets remain, <code>10/5 = 2</code>, so the amount the current user can grab is <code>0.01 ~ 4</code> yuan.</p><h4 id="4-Algorithm-Optimization"><a href="#4-Algorithm-Optimization" class="headerlink" title="4. Algorithm Optimization"></a>4. Algorithm Optimization</h4><p>Although the random amounts generated by the twice-the-mean method are close to the average, I once saw a claim on a forum: <strong>the randomness of the WeChat red packet amount is related to the timing of receiving it, especially when the amount is small</strong>.</p><p>So, Xiao ❤ spent a “huge” sum sending multiple red packets in a WeChat group and found that after sending 4 red packets totaling 0.05, the amount received by the last person is always <code>0.02</code>.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/c2a71bd0cf994b088b5e6367855f2553%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>No exceptions:</p><p><img src="https://p3-juejin.byteimg.com/tos-cn-i-k3u1fbpfcp/fa4ffe2ce88a4361ac9e7b1cc562a13d~tplv-k3u1fbpfcp-jj-mark:3024:0:0:0:q75.awebp#?w=1080&h=712&s=164056&e=png&b=fdfdfd"></p><p>So, most likely the red packet amounts are not purely randomly assigned but are processed before the packets are handed out. For example, before the amounts are generated, an extra non-existent red packet is first created, which totals n × 0.01 (0.01 reserved for each real red packet).</p><p>Then, when the red packet amounts are assigned, <code>0.01</code> is added to each packet’s random value as a base, ensuring that the minimum value of each red packet is never 0.</p><p>So, suppose the user sends 3 red packets with a total amount of 0.04: the system first extracts <code>3*0.01</code> into the “fourth” non-existent red packet, so the random value of the red packet grabbed by the first person is <code>0 ~ (0.04-3*0.01)/3</code>.</p><p>The quotient is rounded down to two decimals, <code>0 ~ (0.04-3*0.01)/3 ==&gt; (0 ~ 0) = 0</code>; adding the previously extracted guaranteed <code>0.01</code> (and being careful not to overrun the total), the first two grabbed amounts are both <code>0.01</code>. The last red packet gets the remaining balance, i.e. 
<code>0.02</code>.</p><p>The algorithm logic can be implemented in Go as follows:</p><figure class="highlight go"><table><tr><td class="code"><pre><span class="line">package main</span><br><span class="line"></span><br><span class="line">import (</span><br><span class="line">    &quot;fmt&quot;</span><br><span class="line">    &quot;math&quot;</span><br><span class="line">    &quot;math/rand&quot;</span><br><span class="line">    &quot;strconv&quot;</span><br><span class="line">    &quot;time&quot;</span><br><span class="line">)</span><br><span class="line"></span><br><span class="line">type RedPack struct &#123;</span><br><span class="line">    // remaining amount, after the 0.01 base of every packet was pre-extracted</span><br><span class="line">    SurplusAmount float64</span><br><span class="line">    // remaining number of packets</span><br><span class="line">    SurplusTotal int</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line">// remainTwoDecimal keeps two decimal places.</span><br><span class="line">func remainTwoDecimal(num float64) float64 &#123;</span><br><span class="line">    numStr := strconv.FormatFloat(num, &#x27;f&#x27;, 2, 64)</span><br><span class="line">    num, _ = strconv.ParseFloat(numStr, 64)</span><br><span class="line">    return num</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line">func getRandomRedPack(rp *RedPack) float64 &#123;</span><br><span class="line">    if rp.SurplusTotal &lt;= 0 &#123;</span><br><span class="line">        return 0</span><br><span class="line">    &#125;</span><br><span class="line">    // the last packet takes the whole remainder plus its 0.01 base</span><br><span class="line">    if rp.SurplusTotal == 1 &#123;</span><br><span class="line">        return remainTwoDecimal(rp.SurplusAmount + 0.01)</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    // average of the remaining amount, floored to two decimals</span><br><span class="line">    avgAmount := math.Floor(100*(rp.SurplusAmount/float64(rp.SurplusTotal))) / 100</span><br><span class="line">    avgAmount = remainTwoDecimal(avgAmount)</span><br><span class="line"></span><br><span class="line">    rand.Seed(time.Now().UnixNano())</span><br><span class="line"></span><br><span class="line">    // cap a single draw near twice the average so the total cannot be overrun</span><br><span class="line">    var max float64</span><br><span class="line">    if avgAmount &gt; 0 &#123;</span><br><span class="line">        max = 2*avgAmount - 0.01</span><br><span class="line">    &#125; else &#123;</span><br><span class="line">        max = 0</span><br><span class="line">    &#125;</span><br><span class="line">    money := remainTwoDecimal(rand.Float64()*max + 0.01)</span><br><span class="line"></span><br><span class="line">    rp.SurplusTotal -= 1</span><br><span class="line">    // only the random part (money - 0.01) is deducted from the pool;</span><br><span class="line">    // the 0.01 base comes out of the pre-extracted reserve</span><br><span class="line">    rp.SurplusAmount = remainTwoDecimal(rp.SurplusAmount + 0.01 - money)</span><br><span class="line">    return money</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line">func main() &#123;</span><br><span class="line">    rp := &amp;RedPack&#123;</span><br><span class="line">        SurplusAmount: 0.06,</span><br><span class="line">        SurplusTotal:  5,</span><br><span class="line">    &#125;</span><br><span class="line">    // pre-extract the guaranteed 0.01 base of every packet</span><br><span class="line">    rp.SurplusAmount -= 0.01 * float64(rp.SurplusTotal)</span><br><span class="line">    total := rp.SurplusTotal</span><br><span class="line">    for i := 0; i &lt; total; i++ &#123;</span><br><span class="line">        fmt.Println(getRandomRedPack(rp))</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Print results:</p><blockquote><p>0.01、0.01、0.01、0.01、0.02</p></blockquote><p>As expected!</p><h2 id="7-Summary"><a href="#7-Summary" class="headerlink" title="7. Summary"></a>7. Summary</h2><p>Behind WeChat’s group chat and red packet grabbing features lie complex interaction techniques and well-designed product experiences. Through these core components, database tables, and detailed interaction processes, users can easily participate and enjoy the convenience of the group chat system.</p><p>And the addition of these fun features is surely one of the reasons why WeChat has so many users, right?</p><p>The system design behind WeChat’s group features is not just a display of technical prowess; it is part of the magic of digital social life.</p>]]></content>
    
    
    <summary type="html">Grab the red envelopes! I&#39;m sure most of you are no strangers to this. So how is WeChat&#39;s group chat system designed to make it easy for us to chat, share pictures and emoticons, and enjoy that magical Red Packet feature?</summary>
    
    
    
    <category term="Backend" scheme="https://www.nablepart.com/categories/Backend/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="framework" scheme="https://www.nablepart.com/tags/framework/"/>
    
    <category term="Backend Technology Sharing" scheme="https://www.nablepart.com/tags/Backend-Technology-Sharing/"/>
    
    <category term="recognize" scheme="https://www.nablepart.com/tags/recognize/"/>
    
    <category term="Architecture" scheme="https://www.nablepart.com/tags/Architecture/"/>
    
    <category term="strangers" scheme="https://www.nablepart.com/tags/strangers/"/>
    
    <category term="WeChat" scheme="https://www.nablepart.com/tags/WeChat/"/>
    
  </entry>
  
  <entry>
    <title>I hear you&#39;ve studied architecture? Come on, build a short link system.</title>
    <link href="https://www.nablepart.com/4ba04ba90618/"/>
    <id>https://www.nablepart.com/4ba04ba90618/</id>
    <published>2023-11-06T10:04:00.000Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
    <content type="html"><![CDATA[<p>Table of Contents</p><ol><li>Introduction</li><li>Three Link Generation Methods</li><li>The Redirection Process</li><li>Cache Optimization</li><li>Designing for High Availability</li><li>Postscript</li></ol><h1 id="01-Introduction"><a href="#01-Introduction" class="headerlink" title="01 Introduction"></a><strong>01 Introduction</strong></h1><p><strong>1) Background</strong></p><p>This is a system design question from my interview with a “ByteDance” department for the position of “Senior Backend Development Engineer”, asked in the second round. At the beginning, the interviewer smiled, asked me to introduce myself, and then we talked about my projects.</p><p>After walking through the projects smoothly and finishing an algorithm question, the interviewer started asking questions.</p><p>The interviewer began: “Young man, your resume says you are familiar with architectural design, right? Then do you know what the ‘three highs’ of system design refer to?”</p><p>I thought to myself: isn’t that what happens when the system is unreliable and the leadership picks the wrong people, so programmers work overtime every day fixing bugs until, at a young age, they end up with high blood lipids, high blood pressure, and high blood sugar!</p><p>However, since this was an interview, the leader certainly didn’t want to hear that, so I answered: the “three highs” mean that system design needs to consider high concurrency, high performance, and high availability:</p><ul><li>High concurrency means the system must be able to process many requests in parallel at the same time;</li><li>High performance means the program should use as little memory and CPU as possible and process requests as fast as possible;</li><li>High availability limits how long the system may be out of service; for example, no more than 31.5 seconds of downtime in a whole year, commonly known as “six nines”, meaning availability is guaranteed 99.9999% of the time.</li></ul><p>The interviewer nodded slightly, as if thinking: not bad, young man; since that can’t stump you, I’ll bring out the big guns. Let’s have a system design question!</p><p><strong>2) Statement of Requirements</strong></p><p>As we all know, business scenarios often need to send users a web address or QR code. Because a full address can be quite long, it is usually shortened to save resources and improve the user experience. For example, the address of a Google search for “computer” looks like this:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">https://www.google.com/search?q=%E8%AE%A1%E7%AE%97%E6%9C%BA&amp;ei=KNZ5Y7y4MpiW-AaI4LSACw&amp;ved=0ahUKEwi87MGgnbz7AhUYC94KHQgwDbAQ4dUDCBA&amp;uact=5&amp;oq=%E8%AE%A1%E7%AE%97%E6%9C%BA&amp;gs_lcp=Cgxnd3Mtd2l6LXNlcnAQAzIECAAQQzIFCAAQgAQyBQgAEIAEMgUIABCABDIFCC4QgAQyBQgAEIAEMgUIABCABDIFCAAQgAQyBQgAEIAEMgUIABCABDoKCAAQRxDWBBCwAzoLCC4QgAQQxwEQ0QM6FggAEOoCELQCEIoDELcDENQDEOUCGAE6BwguENQCEENKBAhBGABKBAhGGABQpBZYzSVglydoA3ABeACAAZ0DiAGdD5IBCTAuNy4xLjAuMZgBAKABAbABCsgBCsABAdoBBAgBGAc&amp;sclient=gws-wiz-serp</span><br></pre></td></tr></table></figure><p>Obviously, sending users such a long string is not very “decent”. Moreover, on systems with character limits, such as microblog posts, sending such a long link is simply not possible. 
In general, most SMS links are short links:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/f07b8ac9218e47d3b1b878fbba7a6911%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>So, to improve the user experience and meet daily business needs, we need to design a short link generation system. Beyond the business functions themselves, it has to serve URL shortening nationwide. With such a large user base, how should the data be stored, and how should high concurrency be handled?</p><h1 id="02-Three-methods-of-link-generation"><a href="#02-Three-methods-of-link-generation" class="headerlink" title="02 Three methods of link generation"></a><strong>02 Three methods of link generation</strong></h1><p><strong>1) Requirements Analysis</strong></p><p>I thought to myself: this interviewer looks kind and smiling, but the question is not simple; this type of system has far too many points to consider and absolutely cannot be taken lightly.</p><p>So I approached the design from four aspects: link generation, URL access, cache optimization, and high availability.</p><p>First, for generating the short link address, we can consider a UUID or an auto-increment ID. For each long link mapped to a short one, a globally unique short link value must be generated, otherwise there will be conflicts. So, short links are characterized by:</p><ul><li>Large storage volume: serving the whole country, at least a million short link addresses need to be generated every day;</li><li>Considerable concurrency: when accesses cluster at the same moment, the system may face at least thousands of requests per second on average;</li><li>Short links must not repeat, otherwise data access conflicts will occur.</li></ul><p><strong>2) Snowflake Algorithm</strong></p><p>First, short links can be generated with the snowflake algorithm plus hashing.<img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/2023-11-07_221259.png"><br>The snowflake algorithm generates a unique number in distributed scenarios from a timestamp, a machine ID, and a sequence number. Its advantage is that it is simple and ready to use.</p><p>After obtaining the unique number from the snowflake algorithm, a hash maps it to a random-looking string; if the string is too long, you can simply take the first 6 characters. However, because hash mapping may collide, this places high demands on the hash algorithm.</p><p><strong>3) Base-62 Generation of Short Links</strong></p><p>Besides the snowflake algorithm, base-62 numbers (0-9a-zA-Z) can also be used to generate short link addresses. First obtain an auto-increment ID, then convert the value to a base-62 string; a number in the hundreds of millions converts to just five or six characters (100 million -&gt; zAL6e).</p><p>Concatenating the short link server’s domain name with this string yields the short link URL, e.g. t.cn&#x2F;zAL6e.</p><p>Generating the auto-increment ID must take performance and concurrency safety into account, so we can build an issuer with Redis’ <code>incr</code> command; it is an atomic operation, so we don’t have to worry about the safety of the numbers. 
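</p><p>To make the base-62 conversion concrete, here is a minimal Go sketch (the alphabet order and the <code>toBase62</code> name are my own illustrative choices, not a fixed standard; in practice the ID would come from the Redis <code>INCR</code> issuer described above):</p>

```go
package main

import "fmt"

// Assumed alphabet: any fixed permutation of 0-9, a-z, A-Z works,
// as long as the encoder and decoder agree on it.
const alphabet = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

// toBase62 converts an auto-increment ID into a short base-62 string.
func toBase62(n uint64) string {
	if n == 0 {
		return string(alphabet[0])
	}
	buf := []byte{}
	for n > 0 {
		buf = append(buf, alphabet[n%62]) // least-significant digit first
		n /= 62
	}
	// reverse so the most-significant digit comes first
	for i, j := 0, len(buf)-1; i < j; i, j = i+1, j-1 {
		buf[i], buf[j] = buf[j], buf[i]
	}
	return string(buf)
}

func main() {
	// A 100-million ID fits in five base-62 characters.
	fmt.Println("t.cn/" + toBase62(100000000))
}
```

<p>With this particular alphabet, <code>toBase62(100000000)</code> yields a five-character string, consistent with the "five or six" characters mentioned above; a different alphabet permutation (as the original zAL6e example suggests) yields a different string of the same length.</p><p>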
And Redis operates in memory, so it is quite efficient.</p><p><strong>4) Random number + Bloom filter</strong></p><p>Besides auto-increment IDs, we can also generate random numbers and convert them to base-62 to produce short links. However, since random numbers may be duplicated, we need a Bloom filter to de-duplicate them.</p><p>The Bloom filter is a cleverly designed data structure: a value is hashed multiple times and the corresponding bits are set and recorded. When a new value arrives, the same hash functions are applied and each corresponding bit is checked: if any of those bits is unset, the value is definitely new; otherwise, it may be a duplicate.</p><p>Of course, this can produce misjudgments: <strong>a Bloom filter will always catch true duplicates, but it may also judge non-duplicate values as duplicates</strong>. The false-positive rate is roughly 0.05%, an acceptable range, and the Bloom filter is extremely efficient.</p><p>Therefore, with the Bloom filter we can determine whether a generated random number is a duplicate: if it is, a new one is generated; if not, it is inserted into the Bloom filter and the database, guaranteeing that every number fetched is unique.</p><p><strong>5) Store the short link in the database</strong></p><p>Different databases may be chosen depending on data volume and technology stack. However, since MySQL is widely used in our department and the question does not mention technology selection, we choose MySQL as the persistent database.</p><p>Whenever a short link is generated, the mapping from short link to long link needs to be stored in MySQL with a unique index, i.e. 
zAL6e -&gt; real URL.</p><h1 id="03-Redirection-process"><a href="#03-Redirection-process" class="headerlink" title="03 Redirection process"></a><strong>03 Redirection process</strong></h1><p>When the browser accesses the short link service, the service fetches the original URL from the short link address and then redirects to it. There are usually two redirection methods:</p><ul><li>One is to return a 301 response code for a permanent redirect, so the browser can access the real URL directly in the future;</li><li>The other is a 302 temporary redirect: the browser visits the real URL this time, but subsequent requests still go through the short link address.</li></ul><p>Although with 301 the browser issues only one request and can obtain the long link from its own cache afterwards, which improves access speed, it makes it impossible to count visits to the short link.</p><p>So, depending on business needs, we usually choose a 302 redirect.</p><h1 id="04-Cache-Design"><a href="#04-Cache-Design" class="headerlink" title="04 Cache Design"></a><strong>04 Cache Design</strong></h1><p>Since a short link is distributed to many users, it may be accessed many times within a short period, so the long link can be put into the Redis cache after writing it to or fetching it from MySQL.</p><p><strong>1) Add to cache</strong></p><p>The mapping between short and long links is usually not modified frequently, so consistency between the database and the cache can be ensured by the simple <strong>Cache-Aside (bypass cache) pattern</strong>:</p><ul><li>When reading data, if the cache misses, the DB is read first; the data taken from the DB is put into the cache while the response is returned;</li><li>When writing data, first update the DB, then delete the cache.</li></ul><p>When a user needs to generate a short link, first check this mapping table for an existing short link address. If there is one, return it directly and extend the key’s expiration time by one hour; if not, generate a new one and store the mapping in the table.</p><p>The cache eviction strategy can be chosen from:</p><ul><li>LRU: Least Recently Used; short link addresses read or written recently are kept in the cache as hot data, evicting keys that have not been accessed for a long time;</li><li>LFU: Least Frequently Used; short link addresses with high recent access frequency are kept as hot data, evicting keys with lower access frequency.</li></ul><p><strong>2) Cache penetration</strong></p><p>However, caching alone cannot prevent certain anomalies, such as “cache penetration”: querying a short link that exists in neither the cache nor the database. Under very high concurrency, all of these cache-missing requests hit the MySQL server, which cannot handle so many requests and blocks, or even crashes.</p><p>Therefore, to prevent miscreants from attacking the server through cache penetration, we can adopt two countermeasures:</p><ul><li>Cache the non-existent short link address, with the short link as the key and an empty value, and set a relatively short expiration time;</li><li>Use a Bloom filter: every stored short link is hashed multiple times into the filter; when a request for a short link arrives, first check the Bloom filter to see whether the address exists in the database; if not, return directly.</li></ul><h1 id="05-High-Availability-Design"><a href="#05-High-Availability-Design" class="headerlink" title="05 High Availability Design"></a><strong>05 High Availability Design</strong></h1><p>Since caching and database 
persistence rely on Redis and MySQL, the high availability of MySQL and Redis must be guaranteed.</p><ol><li>MySQL High Availability</li></ol><p>The MySQL database uses master-slave replication to separate reads and writes: the master node handles writes and the slave nodes handle reads, and Keepalived can be used to achieve high availability.</p><p>Keepalived works by using a virtual IP at the entry point to monitor multiple nodes: a hot-standby server is selected as the master and assigned a virtual IP, through which external requests access the database.</p><p>At the same time, Keepalived checks the availability of the nodes in real time; when a server is found to be down or faulty, it is kicked out of the cluster. If that server is the master, Keepalived triggers an election to choose another server from the cluster as the new master and assigns it the same virtual IP, completing the failover.</p><p>And with Keepalived’s support, none of these operations require human involvement, apart from repairing the failed machine.</p><ol start="2"><li>Redis High Availability</li></ol><p>In high-concurrency, big-data scenarios, all write requests fall on the Redis master node, which puts it under great pressure. Even if you keep scaling vertically by adding memory and CPUs, a single machine still faces steadily growing disk I/O, network, and other pressures, which also hurt performance.</p><p>So Redis uses cluster mode to shard the data, and a sentinel mechanism is added to ensure the cluster’s high availability. Its basic principle: sentinel nodes monitor all the master and slave nodes in the cluster; when the master goes down or fails, a sentinel marks it as <strong>subjectively offline</strong>; when enough sentinels have marked it subjectively offline, its state is changed to <strong>objectively offline</strong>.</p><p>At that point, the sentinels elect a lead sentinel through an election mechanism to perform failover on the Redis master node, guaranteeing the high availability of the Redis cluster; the entire process requires no human intervention.</p><ol start="3"><li>System Fault Tolerance</li></ol><p>Before the service goes live, fully assess the expected business volume and run performance tests. Prepare rate limiting, circuit breaking, and service degradation logic: for example, use the token bucket algorithm for rate limiting, use the Hystrix framework for circuit breaking, and move common configurations into a hot-updatable configuration center so they can be changed in real time.</p><p>When business volume is too large, switch synchronous tasks to asynchronous processing. These service governance measures make the system more stable.</p><h1 id="06-Postscript"><a href="#06-Postscript" class="headerlink" title="06 Postscript"></a>06 Postscript</h1><p>When I finished the last word, the interviewer looked at me with admiration and doubt in his eyes. I thought he must have been impressed by my performance and that the interview was a sure thing.</p><p>Surprisingly, however, the interviewer did not comment on the architectural design at all; he only looked at me and said, “That’s it for today’s interview. Is there anything you’d like to ask?”</p><p>Now it was my turn to be shocked: so did I pass or not? 
You could at least give me a verdict! So I asked, “Based on this interview, what do you think I need to improve?”</p><p>“Algorithms and projects need more practice, but I did find one strong point of yours.” The interviewer laughed and then said, “You’re pretty good at memorizing ‘eight-legged essays’ (rote interview answers)!”</p><p>My hanging heart finally settled, and I thought to myself, “Oh, this one’s in the bag~”</p>]]></content>
    
    
    <summary type="html">The snowflake algorithm generates a unique number in distributed scenarios from a timestamp, a machine ID, and a sequence number. Its advantage is that it is simple and ready to use.</summary>
    
    
    
    <category term="Backend" scheme="https://www.nablepart.com/categories/Backend/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="Backend Technology Sharing" scheme="https://www.nablepart.com/tags/Backend-Technology-Sharing/"/>
    
    <category term="network" scheme="https://www.nablepart.com/tags/network/"/>
    
    <category term="distributed" scheme="https://www.nablepart.com/tags/distributed/"/>
    
    <category term="machine" scheme="https://www.nablepart.com/tags/machine/"/>
    
    <category term="snowflake" scheme="https://www.nablepart.com/tags/snowflake/"/>
    
    <category term="advantage" scheme="https://www.nablepart.com/tags/advantage/"/>
    
    <category term="sequence" scheme="https://www.nablepart.com/tags/sequence/"/>
    
  </entry>
  
  <entry>
    <title>I hear you&#39;ve studied architecture? Come on, let&#39;s make a microblogging system.</title>
    <link href="https://www.nablepart.com/3e957ebaa672/"/>
    <id>https://www.nablepart.com/3e957ebaa672/</id>
    <published>2023-11-06T09:04:00.000Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
    <content type="html"><![CDATA[<p>Table of Contents</p><ol><li>Introduction</li><li>Requirements Analysis</li><li>Outline Design</li><li>Detailed Design</li><li>Publish&#x2F;Subscribe Issues</li><li>Postscript</li></ol><h2 id="1-Introduction"><a href="#1-Introduction" class="headerlink" title="1. Introduction"></a>1. Introduction</h2><p>When I interviewed for a “Senior Backend Development Engineer” position in a “Baidu.com” department a year ago, one of the questions in the third-round interview left a deep impression on me.</p><p>I remember the interviewer being very professional: he asked many project-related and thought-provoking questions, including a few about the high-quality bugs I had accumulated in my daily development work.</p><p>I was feeling rather pleased with myself, thinking I had answered everything well, when the interviewer started to raise the stakes: “Since you feel you have thought a lot about the project, let me test you with a project design topic! These questions are not meant to stump anyone, just to gauge the depth and breadth of the candidate’s technical knowledge!”</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/ee936e0c7c9b44a2957edd4274d6bf40~tplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>“Ah, right, right, right! It’s like this.” Although my heart was full of apprehension (my daily work really was just CRUD, and I hadn’t done architectural design on a project in a long time), I showed no panic and wore a look of calm readiness.</p><p>After all, the interview handbook says: half of any interview is psychological warfare! 
As long as you do your homework, you can hold your ground confidently in the room and won’t be worn down by the interviewer’s one-or-two-hour offensive.</p><h2 id="2-Requirements-Analysis"><a href="#2-Requirements-Analysis" class="headerlink" title="2. Requirements Analysis"></a>2. Requirements Analysis</h2><p>The interviewer asked, “Weibo is a popular social app. Do you use it? Can you name a few of its common functions?”</p><p>I haven’t actually used Weibo much, but I had recently read a system design article about it, so I answered without panic: “Weibo’s common functions are brushing microblogs, posting, and following users; in addition, users can like, comment on, favorite, and repost microblogs.”</p><p>“Good. If you were asked to design a microblogging system now, around the core functions you just mentioned, how would you design it? The system needs to account for high concurrency, high performance, and high availability.”</p><p>After receiving the “product requirements”, I first built a use case diagram of the core functions in my mind, as follows:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/142331a6edd7461aaab12d7753ea7394%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>Each requirement is described as follows:</p><ul><li>Brushing microblogs: users open the Weibo home page in the mobile app, which displays the most recent microblogs published by the friends they follow, sorted by recency;</li><li>Posting: users can post up to 140 characters of text, which can include pictures and videos;</li><li>Following friends: users can follow other users, and can see a user’s information and follower count.</li></ul><h2 id="3-Outline-Design"><a href="#3-Outline-Design" class="headerlink" title="3. Outline Design"></a>3. Outline Design</h2><p>The business functions of Weibo are not hard to understand, but <strong>the concurrency and data volume</strong> are very large:</p><ul><li>User volume on the order of a billion; on average thousands of posts per user; each user may follow thousands of friends;</li><li>High concurrency: page accesses on the order of 100,000 per second, posting volume on the order of 10,000 per second;</li><li>Uneven distribution across users: some star users exceed ordinary users in post count or fan count by several orders of magnitude;</li><li>Uneven distribution over time: a user may suddenly become a hot user at some point, and their fans may also surge by several orders of magnitude.</li></ul><p>These are the characteristics of a typical social system, which can be summarized in three points: <em>massive data, heavy access, non-uniformity</em>. We then do the outline design around the three most common functions: following friends, brushing microblogs, and posting.</p><h3 id="3-1-Follow-Friends"><a href="#3-1-Follow-Friends" class="headerlink" title="3.1 Follow Friends"></a>3.1 Follow Friends</h3><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/ea2fd5404d6946189851500db396fbe2%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>If you open a blogger’s main page on Weibo, it contains the most basic features: the blogger’s followees, the blogger’s fans, and a button to follow this blogger. 
From the blogger’s main page, we can also access his attention and fans sub-pages:</p><ul><li>Attention page, which displays information about all the users that the user follows.</li><li>Follower page, which shows all the fans of the user.</li></ul><p>In the above page, the user can follow a certain user, and can also delete fans, that is, cancel the attention of a certain other user, the business interaction of the function is as follows:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/aba2992222c3426dac1f38c19d81bc4c%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>When a user follows a friend, it needs to go through a load balancing server first, and then send the request to Weibo’s user server cluster, where it involves displaying the user’s information, so the user servers may access object storage servers for images and videos, as well as take out text data from Redis or MySQL.</p><p>Finally, if the user’s follow status is modified, the information is written to the Redis and MySQL clusters.</p><h3 id="3-2-Brush-Microblogging"><a href="#3-2-Brush-Microblogging" class="headerlink" title="3.2 Brush Microblogging"></a>3.2 Brush Microblogging</h3><p>The core of the microblogging system is to solve the problem of high concurrency. 
The overall deployment model of the system is as follows:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/194f99c43f634819b6ca1869883ab265%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>First, we use a CDN (Content Delivery Network) to return responses to user requests quickly. The principle is to deploy servers at regional nodes close to large user populations; when a user accesses the system, <strong>global load balancing</strong> routes the request to the nearest node, which serves the user directly:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/baa62f4015ec4a09943f98bf23b65712%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>Without a CDN, every user request reaches the system’s application server cluster directly, putting great traffic pressure on those servers; with a CDN, user requests are routed to nearby nodes according to the target servers returned by the CDN load balancer.</p><p>The advantage of a CDN is that it can largely <strong>avoid network congestion and make content delivery faster and more stable</strong>. A CDN can be regarded as a system-level cache, which suits the microblogging scenario particularly well because its data rarely changes. Under normal circumstances, a CDN can filter out more than 90% of requests and return data directly. In other words, when users access the Weibo system through a CDN, the vast majority of requests hit the CDN cache, and more than 90% of bandwidth-hungry requests such as images and videos are absorbed by the CDN. 
Requests that miss the CDN arrive at the reverse proxy server in the data center, which checks whether its local cache holds the requested content. If it does, the content is returned directly; if not, the proxy fetches images and videos from the distributed object storage cluster, or fetches microblog text from the application servers.</p><p>When content is fetched from the microblogging application server cluster, the servers first retrieve the latest posts published by the current user’s friends from the Redis cache and build a result page to return. If fewer than 20 posts are cached in Redis, the remaining data is looked up in the MySQL database.</p><h3 id="4-3-Writing-Tweets"><a href="#4-3-Writing-Tweets" class="headerlink" title="3.3 Writing Tweets"></a>3.3 Writing Tweets</h3><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/53f227389a3a44288c7c6e4817bec4b3%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>Writes do not go through the CDN or the reverse proxy: tweets are written directly to the application server cluster via the load-balancing servers. The application servers write each published tweet both to the Redis cache cluster and to the sharded MySQL database.</p><p>Note that <strong>if we write to the database directly, a sudden burst of highly concurrent write requests may overload the database and cause the system to block or crash</strong> (compare the “cache avalanche” problem). Therefore, database writes can instead be sent to a message queue (e.g., a Kafka cluster), whose consumer programs drain messages at a controlled rate and write them to the database, ensuring that the DB’s load does not spike abnormally.</p><h2 id="4-Detailed-Design"><a href="#4-Detailed-Design" class="headerlink" title="4. 
Detailed Design"></a>4. Detailed Design</h2><h3 id="4-1-Table-Design"><a href="#4-1-Table-Design" class="headerlink" title="4.1 Table Design"></a>4.1 Table Design</h3><p>User, relation and post tables:</p><ul><li>user table: primary key id, user information (name, avatar, registration time, verification status, cell phone number, etc.)</li><li>relation table: primary key id, followId, attentionId [follower ID and followed-user ID]</li><li>post table: primary key id, userId, postTime [posting time; millisecond precision is sufficient], content</li></ul><h4 id="Index-Optimization"><a href="#Index-Optimization" class="headerlink" title="Index Optimization"></a>Index Optimization</h4><p>The post table can use the composite index userId+postTime to query a user’s recent posts. Since the composite index is a secondary index, each lookup must go back to the table. <strong>To reduce back-to-table lookups, we can splice the userId and the timestamp together into the postId used as the primary key</strong>: for example, take the millisecond-precision timestamp as the prefix, append the userId, and encode the result as a base-62 (0-9a-zA-Z) string, which also keeps the primary key monotonically increasing.</p><p>However, this makes the index tree take more space, and queries on it are not as fast as on a purely numeric primary key, so weigh the pros and cons of the two and choose the primary key type that best fits the actual situation.</p><h3 id="4-2-Splitting-a-library-into-tables"><a href="#4-2-Splitting-a-library-into-tables" class="headerlink" title="4.2 Sharding the Database"></a>4.2 Sharding the Database</h3><p>When the data volume of the relation and user tables reaches tens of millions or hundreds of millions of rows, the read and write pressure on the database is enormous, which a single database certainly cannot bear. 
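The primary-key scheme described above can be sketched in Python. The field widths and helper names here are illustrative assumptions, not part of the original design:

```python
# Base-62 alphabet (0-9a-zA-Z), as described above.
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def to_base62(n: int, width: int) -> str:
    """Encode a non-negative integer as a fixed-width base-62 string.

    Fixed width means lexicographic order equals numeric order."""
    digits = []
    while n:
        n, r = divmod(n, 62)
        digits.append(ALPHABET[r])
    s = "".join(reversed(digits)) or "0"
    return s.rjust(width, "0")

def make_post_id(user_id: int, ts_ms: int) -> str:
    # Millisecond-timestamp prefix keeps newly generated keys increasing;
    # the userId suffix makes the key unique across users.
    return to_base62(ts_ms, 8) + to_base62(user_id, 6)
```

Because the timestamp comes first and both parts have fixed widths, IDs generated later always compare greater, so the primary key stays roughly monotonic (clock skew aside).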
Therefore, the microblog DB needs to be a distributed database deployed in shards.</p><h4 id="Vertical-Table-Splitting"><a href="#Vertical-Table-Splitting" class="headerlink" title="Vertical Table Splitting"></a>Vertical Table Splitting</h4><p>The relation table above [id, followId, attentionId] stores both follower and followed-user information. As the number of users grows, a problem appears: it is hard to choose a good sharding key for this table:</p><ul><li>If followId is chosen as the hash key, querying the current user’s following list stays on one shard, but querying all of a user’s followers must touch multiple shards;</li><li>If attentionId is chosen as the hash key, querying all of a user’s followers stays on one shard, but querying the current user’s following list must touch multiple shards.</li></ul><p>So we split the relation table into:</p><ul><li>follow table: primary key id, userId, followId</li><li>attention table: primary key id, userId, attentionId</li></ul><h4 id="Horizontal-Splitting"><a href="#Horizontal-Splitting" class="headerlink" title="Horizontal Splitting"></a>Horizontal Splitting</h4><p>Horizontal splitting means deploying a distributed database with hash sharding; the sharding key can be the user ID or the post ID.</p><p>If we shard by user ID, all posts made by the same user are saved on the same database server. The advantage is that <strong>when the system needs to look up the tweets of a given user, only one server has to be accessed</strong>. The disadvantage is that a star user attracts a huge amount of data access, and this hot data puts excessive load pressure on that one server. 
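The hash-sharding rule discussed above can be made concrete with a minimal sketch; the shard count and the simple modulo rule are assumptions for illustration:

```python
N_SHARDS = 100

def shard_of(key: int) -> int:
    """Route a record to a shard by hashing its sharding key."""
    return key % N_SHARDS

# follow table sharded by userId: user 7's entire following list
# lands on a single shard, so reading it touches one server...
follow_rows = [(7, followed) for followed in range(1, 1001)]
shards_for_read = {shard_of(user_id) for user_id, _ in follow_rows}

# ...whereas finding everyone who follows user 7 would have to scan
# all shards, which is why the relation table is split in two.
```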
Similarly, if a user posts very frequently, the data on a single server can grow excessively.</p><p>If we shard by post ID, we avoid the hotspot aggregation caused by user-ID sharding, but <strong>looking up all the microblogs of one user requires accessing many randomly chosen shards</strong>, which puts too much pressure on the whole database cluster.</p><p>All things considered, the hotspot problem caused by user-ID sharding, as well as the data growth caused by a single user posting frequently, can be mitigated by optimizing the cache, so we shard by user ID.</p><h3 id="4-3-Hot-Users-Problems"><a href="#4-3-Hot-Users-Problems" class="headerlink" title="4.3 Hot Users Problems"></a>4.3 Hot Users Problems</h3><p>After sharding the tables, query speed is largely solved, but queries involving hot star users still need optimization. Consider the following scenarios:</p><ul><li>A hot star user has a huge number of followers, so counting their followers from the follow table scans a very large number of rows, and this inefficient operation gets worse as the follower count grows.</li><li>When browsing the feed, a star user’s posts are viewed over and over; if the DB were hit every time a follower queried them, the performance pressure would undoubtedly be severe.</li></ul><p>To solve the DB query problem in these two scenarios, caching can be introduced. 
But cache space is limited and we cannot possibly cache all the data, so choosing a good cache eviction strategy is the focus of our discussion.</p><h4 id="Time-Elimination-Strategy"><a href="#Time-Elimination-Strategy" class="headerlink" title="Time Elimination Strategy"></a>Time Elimination Strategy</h4><p>For hot content, both hot posts and hot users need to be added to the cache, and the eviction strategy can be a time-based eviction algorithm.</p><p>All tweets published in the last n days are cached, and when a user refreshes their feed, the tweets are looked up in the cache first. If the cache yields a full page of, say, 10 tweets, the list is returned to the user directly; if the cache does not hold enough tweets, the rest are looked up in the database.</p><p>Concretely, we can cache all tweets published within the last 7 days, where the cache key is the user ID and the value is the list of post IDs published in the last 7 days. At the same time, each post ID and its content are cached as a separate key and value.</p><h4 id="Local-Cache-Mode"><a href="#Local-Cache-Mode" class="headerlink" title="Local Cache Mode"></a>Local Cache Mode</h4><p>In addition, for exceptionally popular microblogs, such as celebrities getting married, divorcing, or making official announcements, highly concurrent accesses concentrate on a single key, which puts great load pressure on a single Redis server. 
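The 7-day rule can be sketched as follows; the class and method names are illustrative, and a real deployment would use Redis with key expiry rather than an in-process dict:

```python
import time

SEVEN_DAYS = 7 * 24 * 3600

class TimedPostCache:
    """Per-user post lists with time-based eviction applied on read."""

    def __init__(self, window: float = SEVEN_DAYS):
        self.window = window
        self.posts_by_user = {}   # userId -> [(ts, postId)], newest first
        self.content = {}         # postId -> post body

    def add(self, user_id, post_id, body, ts=None):
        ts = time.time() if ts is None else ts
        self.posts_by_user.setdefault(user_id, []).insert(0, (ts, post_id))
        self.content[post_id] = body

    def recent(self, user_id, now=None):
        """Return post IDs inside the window, dropping expired entries."""
        now = time.time() if now is None else now
        kept = [(ts, pid) for ts, pid in self.posts_by_user.get(user_id, [])
                if now - ts <= self.window]
        self.posts_by_user[user_id] = kept
        return [pid for _, pid in kept]
```

If `recent` returns fewer posts than one page, the caller falls back to the database, mirroring the Redis-then-MySQL flow described earlier.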
Therefore, the microblogging system can enable a <strong>local cache mode</strong>: the application servers cache the hottest microblogs in their own memory, so that when users browse the feed, the server first checks whether the requested post IDs are in the local cache.</p><p>For big-V users with more than 5 million followers, we can cache all of their tweets published within the last 48 hours, further reducing the pressure of querying hot data.</p><h2 id="5-Microblog-Publishing-Subscribing-Issues"><a href="#5-Microblog-Publishing-Subscribing-Issues" class="headerlink" title="5. Microblog Publishing&#x2F;Subscribing Issues"></a>5. Microblog Publishing&#x2F;Subscribing Issues</h2><p>The publish&#x2F;subscribe problem is the core business problem of Weibo: how does a user quickly get the latest content of all friends after following them?</p><h3 id="5-1-Push-Mode"><a href="#5-1-Push-Mode" class="headerlink" title="5.1 Push Mode"></a>5.1 Push Mode</h3><p>When a user publishes a post, the message is immediately pushed to all of their followers. Since the followers are not necessarily online at that moment, the pushed data must be stored. This way, every time a user adds a new post, it is pushed to the DB shard of each of their followers, and each follower can read new messages directly from their own shard when browsing.</p><p>An obvious problem with the push model: <strong>if a user has tens of millions of followers, then every time that user posts a tweet, tens of millions of records must be inserted into the subscription table, i.e., “write diffusion”</strong>. 
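The write diffusion just described can be shown with a toy fan-out sketch; the shard rule and inbox layout are assumptions for illustration:

```python
N_SHARDS = 100

def shard_of(user_id: int) -> int:
    return user_id % N_SHARDS

def fan_out(post_id: int, follower_ids, inboxes):
    """Push mode: insert one inbox row per follower.

    One post by a user with N followers costs N writes, which is
    exactly the 'write diffusion' problem."""
    for fid in follower_ids:
        inboxes[shard_of(fid)].append((fid, post_id))

inboxes = {s: [] for s in range(N_SHARDS)}
fan_out(9001, follower_ids=range(5000), inboxes=inboxes)
total_rows = sum(len(rows) for rows in inboxes.values())
```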
Many of those followers are zombie fans (users who are almost never online), and the consequence is that the database comes under very high pressure and may block or crash.</p><h3 id="5-2-Pull-Mode"><a href="#5-2-Pull-Mode" class="headerlink" title="5.2 Pull Mode"></a>5.2 Pull Mode</h3><p>When a user publishes a post, it is saved only in their own business table. When a follower comes online, the system reads the posts from the tables of the users they follow and returns them in chronological order.</p><p>The problem with pull mode: if a user follows 500 star users, every refresh must query each shard for the posts published by the different stars. <strong>A star may have tens of millions or even hundreds of millions of fans, which means there may be tens of millions or even hundreds of millions of simultaneous read operations, i.e., “read diffusion”</strong>, and sharding the database then does little to relieve the read pressure.</p><p>Therefore, the microblogging system first needs to limit the number of users one can follow: ordinary users are limited to following 2,000 people, and VIP users to 5,000. 
Secondly, it should minimize the number of database queries when a user refreshes the feed, and use the cache to serve as many posts as possible.</p><h3 id="5-3-Combination-of-Push-and-Pull"><a href="#5-3-Combination-of-Push-and-Pull" class="headerlink" title="5.3 Combination of Push and Pull"></a>5.3 Combination of Push and Pull</h3><p>We found that even with a cap on the number of friends, neither “push mode” nor “pull mode” alone can solve the microblogging system’s subscribe&#x2F;publish problem, so we finally adopt a “push-pull combination” mode, which can be realized in the following two ways.</p><h4 id="1-Distinguish-between-big-v-stars"><a href="#1-Distinguish-between-big-v-stars" class="headerlink" title="1) Distinguish between big v-stars"></a>1) Distinguish between big v-stars</h4><p>For big-V star users (those with more than 5 million followers), to prevent write diffusion, we only synchronize each post to the 100 database shards (assuming there are 100 shards), each record needing at least three fields: userId, postId, postTime. No matter how many followers they have, there are only 100 copies of the data. 
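Under this push-pull split, a feed read merges two sources: the user's own pushed inbox and the pulled big-V feeds. A sketch, assuming each feed is a newest-first list of `(post_time, post_id)` pairs:

```python
import heapq
from itertools import islice

def merge_timeline(inbox, big_v_feeds, limit=10):
    """Merge the pushed inbox with the pulled big-V feeds, newest first.

    heapq.merge keeps the result sorted provided every input list is
    already sorted descending by (post_time, post_id)."""
    merged = heapq.merge(inbox, *big_v_feeds, reverse=True)
    return [post_id for _, post_id in islice(merged, limit)]
```

For example, `merge_timeline([(5, "a"), (3, "b")], [[(4, "c")], [(6, "d")]], limit=3)` interleaves the feeds by timestamp and returns the three newest post IDs.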
This avoids wasting network and storage resources on writes for the many zombie fans who will almost never come online.</p><p>For ordinary users, we can keep using push mode, so that when most users read the latest posts, they only need to read their own shard to get the data.</p><p>This design is fairly simple, but when a user browses the feed, <strong>the query may have to run twice: once against their own subscription table, and once against the publication tables of the big-V users they follow</strong>.</p><h4 id="2-Distinguish-online-status"><a href="#2-Distinguish-online-status" class="headerlink" title="2) Distinguish online status"></a>2) Distinguish online status</h4><p>The second way is to choose push or pull according to the user’s state: if the user is currently online, push mode is used, and the system maintains for them, in the Redis cache, a list of the latest tweets published by their friends. If a followed friend publishes a new tweet, its post information is immediately inserted at the head of the list, and when the user refreshes the feed, the system just returns this list.</p><p><strong>If the user is offline, the system deletes the list; when the user logs in again and refreshes, the list is rebuilt for them using pull mode</strong>.</p><p>How do we determine whether a user is online? On the one hand, by the interval between user operations, i.e., a heartbeat mechanism; on the other hand, machine learning can predict a user’s online hours, and the system’s idle time can be used to build their latest-microblog list in advance.</p><h2 id="6-Postscript"><a href="#6-Postscript" class="headerlink" title="6. Postscript"></a>6. Postscript</h2><p>The interviewer listened to my analysis and thought this kid’s thinking was quite comprehensive, though his face could not show it. 
So he nodded slightly and said, “That’s all the questions I have. Is there anything you want to ask?”</p><p>Hoping to turn the tables, I asked a few harmless business questions, and the interview was over. After all, if the interviewer had pressed me on the details of the architectural design, I might never have gotten through it.</p><p>I can only sigh: with these rote architecture questions, <strong>memorizing is really slow, and forgetting is really fast</strong>!</p><p>Fortunately, I have the habit of summarizing architectural designs, which saved me from embarrassment. I will also post all the architectural designs I encounter during interviews on my personal WeChat official account; feel free to grab them if you need them~</p>]]></content>
    
    
    <summary type="html">Microblog, as a social app with a billion registered users, can be considered a &quot;must-have&quot; in everyday life. So, how do we design the core functions of a microblogging system, and how do we ensure its high availability under high concurrency and massive data?</summary>
    
    
    
    <category term="Technology" scheme="https://www.nablepart.com/categories/Technology/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="Backend Technology Sharing" scheme="https://www.nablepart.com/tags/Backend-Technology-Sharing/"/>
    
    <category term="Microblog" scheme="https://www.nablepart.com/tags/Microblog/"/>
    
    <category term="registered" scheme="https://www.nablepart.com/tags/registered/"/>
    
    <category term="national" scheme="https://www.nablepart.com/tags/national/"/>
    
    <category term="concurrency" scheme="https://www.nablepart.com/tags/concurrency/"/>
    
    <category term="microblogging" scheme="https://www.nablepart.com/tags/microblogging/"/>
    
    <category term="social" scheme="https://www.nablepart.com/tags/social/"/>
    
  </entry>
  
  <entry>
    <title>Interviewer: If I ask about high availability, how would Your Excellency respond?</title>
    <link href="https://www.nablepart.com/b9642d86c7ad/"/>
    <id>https://www.nablepart.com/b9642d86c7ad/</id>
    <published>2023-11-06T08:04:00.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
    <content type="html"><![CDATA[<h1 id="1-Introduction"><a href="#1-Introduction" class="headerlink" title="1. Introduction"></a>1. Introduction</h1><h2 id="1-1-Reservoir-flooding"><a href="#1-1-Reservoir-flooding" class="headerlink" title="1.1 Reservoir flooding"></a>1.1 Reservoir flooding</h2><p>Shenzhen had just seen off Typhoon Saola when it welcomed Typhoon Haikui.</p><p>On the same day, schools in Shenzhen once again announced the suspension of classes.</p><blockquote><p>A student’s comment: school in Shenzhen is just too “tough”; two closures in the first week of term.</p></blockquote><p>The office workers’ inner monologue: enjoy it while you can; as long as the traffic keeps moving, we keep working.</p><blockquote><p>Shenzhen Transportation: the reservoir is full, but what does that have to do with my traffic? After all, our high-availability design is no empty boast.</p></blockquote><h2 id="1-2-Availability-of-Shenzhen-transportation-system"><a href="#1-2-Availability-of-Shenzhen-transportation-system" class="headerlink" title="1.2 Availability of Shenzhen transportation system"></a>1.2 Availability of Shenzhen transportation system</h2><p>If we view Shenzhen’s traffic as an application system, then from a software point of view the flooding was an unexpected attack, and the reservoir discharge was in effect <strong>a bulk data deletion carried out to keep the system stable</strong>.</p><p>The instantaneous volume of data was so large that Shenzhen’s transportation system was briefly unavailable. 
However, through the drainage system, temporary diversions, and other measures, Shenzhen’s transportation, a system serving more than 10 million users, was quickly up and running again.</p><p>I have to say: impressive!</p><h3 id="Interview-Review"><a href="#Interview-Review" class="headerlink" title="Interview Review"></a>Interview Review</h3><p>What does software availability have in common with transportation availability?</p><p>My thoughts wandered to an interview question from a second-round Tencent interview long ago.</p><p>Interviewer: we know distributed systems have the “three highs”: high performance, high concurrency, and high availability. <strong>Taking the system’s performance and concurrency into account, from what angles would you design a highly available system</strong>?</p><p>Drawing on how Shenzhen handled its traffic, I will introduce in turn the basic concepts of high availability, performance estimation, system testing, and flow-limiting techniques.</p><h1 id="2-High-Availability-Design"><a href="#2-High-Availability-Design" class="headerlink" title="2. High Availability Design"></a>2. High Availability Design</h1><h2 id="2-1-What-is-High-Availability"><a href="#2-1-What-is-High-Availability" class="headerlink" title="2.1 What is High Availability"></a>2.1 What is High Availability</h2><p>High Availability (HA) is one of the factors that must be considered in the architectural design of a distributed system, and it is usually measured by the proportion of time that the <strong>system provides its service</strong>.</p><p>If we treat Shenzhen’s traffic as a system and assume the traffic can always run, the availability of the system is 100%.</p><p>Of course, just as with rainstorms and floods, the system can suffer sudden traffic surges or hacker attacks, so 100% availability is hard to guarantee.</p><p>
<img src="https://s2.loli.net/2023/11/07/PlK9dIfTonsvkhJ.webp"></p><p>If, for every 100 units of time the system runs, there is 1 unit of time in which the service is unavailable, the availability of the system is 99%.</p><p>Large software systems such as Taobao and WeChat pursue a high-availability goal of 99.9999%, commonly known as <strong>six 9s</strong>, i.e., no more than 31.5 seconds of downtime over a whole year.</p><h2 id="2-2-Performance-Metrics"><a href="#2-2-Performance-Metrics" class="headerlink" title="2.2 Performance Metrics"></a>2.2 Performance Metrics</h2><p>When we design a system, we need to measure the system’s load capacity, i.e., do a <strong>performance estimation</strong>, and then design for high availability against that target.</p><p>Before doing a performance estimation, we have to understand a few metrics.</p><h4 id="Response-Time"><a href="#Response-Time" class="headerlink" title="Response Time"></a>Response Time</h4><p>Response time is the time from when a <strong>client sends out a request until it receives the response data</strong>. It is the system’s most important performance indicator, directly reflecting how fast the system processes requests.</p><p>Why is response time the most important indicator? Taking Shenzhen transportation as an example, the response time is the time a passenger spends queuing for a taxi, or queuing for the subway at a station.</p><p><img src="https://s2.loli.net/2023/11/07/fTwQNjopgcbyqJY.webp"></p><p>The length of the response time directly reflects whether transit is operating normally.</p><h4 id="Concurrency-Count"><a href="#Concurrency-Count" class="headerlink" title="Concurrency Count"></a>Concurrency Count</h4><p>The concurrency count is the number of requests <strong>the system handles at the same time</strong>, reflecting the <strong>load pressure</strong> on the system. 
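The six-9s downtime figure above can be checked with a quick calculation:

```python
def max_downtime_seconds(availability: float,
                         period_seconds: float = 365 * 24 * 3600) -> float:
    """Allowed downtime per period at a given availability level."""
    return period_seconds * (1.0 - availability)

# 99.9999% over a year: roughly 31.5 seconds of downtime allowed
six_nines = max_downtime_seconds(0.999999)
```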
When we do performance testing, we usually use multiple threads to simulate concurrent users; each thread acts as one user request, and the thread count is the concurrency figure recorded in the performance metrics.</p><p>The concurrency count of a transportation system can be understood as the total number of passengers in transit across all vehicles.</p><h4 id="Throughput"><a href="#Throughput" class="headerlink" title="Throughput"></a>Throughput</h4><p>Throughput is <strong>the number of requests the system processes per unit of time</strong>, reflecting the system’s capacity to handle business. It is generally measured in HPS (Hits Per Second), QPS (Queries Per Second), and TPS (Transactions Per Second).</p><p>During a system’s normal operating phase, the three metrics above are related by: <strong>Throughput &#x3D; Concurrency &#x2F; Response Time</strong>.</p><p>Throughput in a transportation system reflects whether traffic is flowing smoothly and how much capacity the system is carrying.</p><h2 id="2-3-System-Testing"><a href="#2-3-System-Testing" class="headerlink" title="2.3 System Testing"></a>2.3 System Testing</h2><p>Having become familiar with the system’s performance indicators, we can start system testing. The overall process is: <strong>keep increasing the concurrency on the system to test out its ability to withstand pressure and find its performance thresholds</strong>.</p><p>
<img src="https://s2.loli.net/2023/11/07/VwYdUZNRAEcQaPK.webp"></p><p>System testing is divided into three phases: <strong>performance testing, load testing</strong>, and <strong>stress testing</strong>.</p><h4 id="Performance-Testing"><a href="#Performance-Testing" class="headerlink" title="Performance Testing"></a>Performance Testing</h4><p>In the early stage of system design, we first set an expected performance target. During testing, we <strong>continuously apply pressure to the system to verify whether it achieves the expected performance target</strong> within an acceptable budget of resources.</p><p>Taking Shenzhen transportation as an example: when designing a subway system in its initial stage, we size it according to the expected daily passenger flow, based on research and estimation.</p><p>Performance testing can then be interpreted as increasing the number of passengers in a computer model to estimate whether the subway system achieves its expected transport capacity.</p><h4 id="Load-Testing"><a href="#Load-Testing" class="headerlink" title="Load Testing"></a>Load Testing</h4><p>Load testing means continuously applying concurrent requests to the system, increasing the pressure <strong>until one or more of the system’s metrics reaches a safety threshold</strong>.</p><p>The same holds for transportation systems: if too many objects (people or cars) take part in the traffic, overall transport capacity suffers. 
That’s why all the first-tier cities limit the number of cars on the road.</p><h4 id="Stress-test"><a href="#Stress-test" class="headerlink" title="Stress test"></a>Stress test</h4><p>Going beyond the safe load, we keep increasing the number of concurrent requests and stressing the system <strong>until the system crashes or can no longer process requests</strong>; the concurrency at that point is the system’s <strong>maximum stress tolerance</strong>.</p><p>A stress test measures the capacity at the system’s bottleneck. In the traffic analogy, if the number of passengers at some moment reached the maximum capacity, Shenzhen’s traffic would immediately be paralyzed.</p><blockquote><p>PS: the maximum stress tolerance is a <strong>theoretical value</strong> that may never be reached in practice, but it must be estimated during system design.</p></blockquote><h1 id="3-“Flow-Limiting”-for-Highly-Available-Designs"><a href="#3-“Flow-Limiting”-for-Highly-Available-Designs" class="headerlink" title="3. “Flow Limiting” for Highly Available Designs"></a>3. “Flow Limiting” for Highly Available Designs</h1><h2 id="3-1-Flow-limiting"><a href="#3-1-Flow-limiting" class="headerlink" title="3.1 Flow limiting"></a>3.1 Flow limiting</h2><p>Flow limiting generally does not need to be considered in the early stage of system design, but as the number of users grows, <strong>when the system’s processing capacity can no longer cope with a sudden surge of external traffic, flow-limiting measures must be taken to keep the system stable</strong>.</p><p>In a transportation system, the most familiar flow limit is the restriction on out-of-town vehicles.</p><p>
<img src="https://s2.loli.net/2023/11/07/jA7kBMvYsPnzTu8.webp"></p><p>For example, in Shenzhen, out-of-town cars are only allowed to travel on specified dates (weekends or holidays), or are barred during specified peak hours (weekdays 7am to 9am and 5:30pm to 7:30pm). All of these are flow limits that keep the transportation system running normally.</p><h3 id="1-Traffic-Restriction-Indicators"><a href="#1-Traffic-Restriction-Indicators" class="headerlink" title="1) Traffic Restriction Indicators"></a>1) Traffic Restriction Indicators</h3><h4 id="TPS"><a href="#TPS" class="headerlink" title="TPS"></a>TPS</h4><p>Transactions Per Second: <strong>the number of transactions completed per second</strong>. This would be the most reasonable value to limit on, but it is not very practical, because in a distributed business system a transaction often needs several modules to complete.</p><p>Limiting by TPS may require a very coarse time granularity, making it hard to assess the system’s response performance accurately.</p><p>In the transportation system, if we treat a passenger traveling from Huaqiangbei in Futian District to Kexingyuan in Nanshan District as one transaction, then limiting the number of such passengers is obviously not feasible, because along the way the passenger may take the subway, buses, and other means of transport; that is, <strong>one transaction requires the cooperation of multiple modules</strong>.</p><h4 id="HPS"><a href="#HPS" class="headerlink" title="HPS"></a>HPS</h4><p>Hits Per Second: <strong>the number of request hits per second</strong>. If each transaction completes in one request, then TPS and HPS are equivalent. 
However, in a distributed scenario, a transaction may require multiple requests, so TPS and HPS are not equivalent.</p><p>Continuing the TPS example: if a passenger needs only one direct bus ride from Huaqiangbei to Kexingyuan, then the transaction completes with a single leg of transport (a single request).</p><h4 id="QPS"><a href="#QPS" class="headerlink" title="QPS"></a>QPS</h4><p>Queries Per Second: <strong>the number of client queries that can be answered per second</strong>, also an important measure of overall server performance.</p><p>In the transportation system, QPS can be understood as the total number of passengers carried. Currently, most mainstream flow-limiting methods use HPS as the limiting indicator.</p><h3 id="2-Flow-limiting-methods"><a href="#2-Flow-limiting-methods" class="headerlink" title="2) Flow limiting methods"></a>2) Flow limiting methods</h3><h4 id="1-Traffic-counter"><a href="#1-Traffic-counter" class="headerlink" title="1. Traffic counter"></a>1. Traffic counter</h4><p>The simplest and most direct way to limit flow: for example, allow at most 100 requests within every 5 seconds, and <strong>deny access once the count is exceeded</strong>. The transportation analogue is limiting the number of vehicles allowed to pass in a given period.</p><p>This method has two obvious problems.</p><h5 id="centralized-access"><a href="#centralized-access" class="headerlink" title="centralized access"></a>Centralized access</h5><p>The unit of time is hard to control, and bursts of concentrated access slip through. For example, consider the following scenario:</p><p><img src="https://s2.loli.net/2023/11/07/r9PpobgTjzfVeLS.webp"></p><p>In the first 4 seconds there is only 1 access; in the 5th second there are 99; in the 6th second there are 99 again; and in the following 4 seconds there is only 1. 
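The counter just described can be sketched as a fixed-window limiter; the parameter names are illustrative, and the timestamp is passed in explicitly to keep the example deterministic:

```python
class WindowCounter:
    """Fixed-window flow limiter: at most `limit` requests per `window` seconds."""

    def __init__(self, limit: int = 100, window: float = 5.0):
        self.limit, self.window = limit, window
        self.window_id = None
        self.count = 0

    def allow(self, now: float) -> bool:
        win = int(now // self.window)      # aligned window index
        if win != self.window_id:          # entering a new window resets the count
            self.window_id, self.count = win, 0
        if self.count < self.limit:
            self.count += 1
            return True
        return False
```

Note how this reproduces the boundary flaw from the figure: with limit=100 and window=5, 99 requests at t=4.9s and 99 more at t=5.1s are all admitted because they fall in different windows, even though 198 requests arrive within 0.2 seconds.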
Globally, the limit of 200 requests per 10 seconds is not exceeded, but the graph makes clear that this traffic pattern is an anomaly.</p><h5 id="Unnecessary-Traffic-Limiting"><a href="#Unnecessary-Traffic-Limiting" class="headerlink" title="Unnecessary Traffic Limiting"></a>Unnecessary Traffic Limiting</h5><p>Sometimes traffic exceeds the threshold for a period, yet limiting is not really necessary. For example, consider the following scenario:</p><p><img src="https://s2.loli.net/2023/11/07/AJb8d2DzVlvcwqr.webp"></p><p>If the two middle blocks of accesses happen to fall within one 5-second cycle, the traffic exceeds the limit. In this case, the next 10 requests may be dropped, which doesn’t make sense.</p><h4 id="2-Sliding-Time-Window"><a href="#2-Sliding-Time-Window" class="headerlink" title="2. Sliding Time Window"></a>2. Sliding Time Window</h4><p>The sliding time window is a popular traffic-limiting algorithm. Its main idea is to treat time as a window that scrolls forward, as shown in the following figure:</p><p><img src="https://s2.loli.net/2023/11/07/RLWX3dZuVwslFIc.webp"></p><p>Its characteristic is that <strong>time is processed in slices: at each step, the sliding window counts the requests across the whole covered period, and at the next step the oldest time slice is discarded and the newest one is added</strong>. This solves the problems that occur with traffic counters.</p><p>Its disadvantages are that it cannot control traffic finely enough to limit bursts concentrated within a short period, and that once the limit is reached, all further requests are bluntly rejected outright.</p><p>In a transportation system this may mean losing part of the passing traffic, which is not very friendly to the user experience.</p><h4 id="3-Leaky-Bucket-Algorithm"><a href="#3-Leaky-Bucket-Algorithm" class="headerlink" title="3. Leaky Bucket Algorithm"></a>3. 
Leaky Bucket Algorithm</h4><p>The idea of the leaky bucket algorithm is shown below:</p><p><img src="https://s2.loli.net/2023/11/07/lUgp6W9LYyVI4sx.webp"></p><p>A leaky bucket is a fixed-size queue that <strong>caches the requests sent by the client and then sends them to the server at an even rate</strong>.</p><p>If the client sends requests too fast and the leaky bucket’s queue fills up, the bucket either rejects requests outright or falls back to <strong>degraded</strong> processing logic, so the server side is not impacted.</p><p>The advantage of the leaky bucket algorithm is that it is simple to implement and can use <strong>message queues</strong> to shave peaks and fill valleys. But it also has several problems:</p><ul><li>The bucket size is hard to tune: too large puts more pressure on the server, while too small may lead to a large number of requests being discarded;</li><li>The rate at which the bucket releases requests to the server is difficult to control;</li><li>Caching requests makes their response time longer.</li></ul><h4 id="4-Token-Bucket-Algorithm"><a href="#4-Token-Bucket-Algorithm" class="headerlink" title="4. Token Bucket Algorithm"></a>4. Token Bucket Algorithm</h4><p>The token bucket algorithm follows much the same logic as hospital registration: before seeing a doctor you must register, and the hospital releases only a limited number of registration slots each day:</p><p><img src="https://s2.loli.net/2023/11/07/pmOlaVLI9oDQJtK.webp"></p><p>It is the same with tokens in the token bucket algorithm: before sending a request, the client must first obtain a token from the token bucket. 
If it obtains a token, it sends the request; if it fails to obtain one, the request can only be rejected and must wait.</p><p>The token bucket algorithm solves the three problems of the leaky bucket algorithm (the release rate is hard to control, the bucket size is hard to control, and request latency grows), and its implementation is not complex; it can be realized with semaphores. It is the most widely used algorithm in real traffic-limiting scenarios; for example, Google’s Guava library uses a token bucket for rate limiting.</p><p>In the transportation system, the license plate lottery follows the token bucket idea. So <strong>software systems serve life, and software design also comes from life</strong>.</p><h4 id="5-How-to-limit-flow-in-distributed-scenarios"><a href="#5-How-to-limit-flow-in-distributed-scenarios" class="headerlink" title="5. How to limit flow in distributed scenarios"></a>5. How to limit flow in distributed scenarios</h4><p>In distributed scenarios, can the above traffic-limiting schemes still be applied? Take an example:</p><p>
<img src="https://s2.loli.net/2023/11/07/481LjrNFtgz9fBA.webp"></p><p>If, for the whole distributed system, we put the tokens into a separate piece of middleware (e.g., Redis), then the client must interact with the token bucket when invoking the composite service, and the composite service in turn invokes the order, inventory, and account services, so the number of interactions obviously increases a lot.</p><p>One improvement is for the client to obtain four tokens before invoking the service: <strong>it spends one token when invoking the composite service and passes three tokens on to it, and each sub-service invocation then consumes one token</strong>.</p><p>In the transportation system, the distributed token bucket design is like a single trip that must pass through multiple checkpoints, each requiring a traffic-limiting token, as a way to keep the whole system stable.</p><h2 id="4-Summary"><a href="#4-Summary" class="headerlink" title="4. Summary"></a>4. Summary</h2><p>For traffic limiting, it is important to choose a suitable algorithm. The token bucket algorithm has clear advantages and is the limiting algorithm used in many large-scale projects.</p><p>When designing a system, these patterns need to be paired with estimates of business volume and performance-test data to tune the thresholds, and the thresholds are best kept in a hot-updatable configuration center so they can be modified in real time.</p><blockquote><p>Interviewer: hmmm, good. Besides flow limiting, are there any other methods that can be used in system design to maintain high availability?</p></blockquote><p><strong>Flow limiting, fusing, and service degradation are all important design approaches for high system availability</strong>.</p><p>Fusing (circuit breaking) is the equivalent of putting a fuse between requests and services. 
When the service can’t withstand the sustained access pressure, the fuse breaks, which prevents the server from being crushed by the load and crashing.</p><p>Compared with flow limiting and fusing, <strong>service degradation is considered from a global system perspective</strong>. After a service’s fuse trips, requests are generally routed through a configured fallback handler, and that fallback is the degradation logic.</p><blockquote><p>Interviewer: ok, interview time is almost up, let’s not expand on this for now, let’s do an algorithm question ……</p></blockquote><p>Due to limited space, in this issue we only introduced the indicators and methods related to flow limiting.</p>]]></content>
    
    
    <summary type="html">If we view Shenzhen’s traffic as an application system, then from a software-development perspective a flood is an accidental attack, and the Shenzhen Reservoir’s flood discharge is really a bulk data deletion performed to keep the system stable. Comparing with transportation availability, what does software availability have in common with it?</summary>
    
    
    
    <category term="Backend" scheme="https://www.nablepart.com/categories/Backend/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="Backend" scheme="https://www.nablepart.com/tags/Backend/"/>
    
    <category term="network" scheme="https://www.nablepart.com/tags/network/"/>
    
    <category term="Interview" scheme="https://www.nablepart.com/tags/Interview/"/>
    
    <category term="Architecture" scheme="https://www.nablepart.com/tags/Architecture/"/>
    
    <category term="system" scheme="https://www.nablepart.com/tags/system/"/>
    
    <category term="transportation" scheme="https://www.nablepart.com/tags/transportation/"/>
    
    <category term="software" scheme="https://www.nablepart.com/tags/software/"/>
    
  </entry>
  
  <entry>
    <title>How can you write code if you can&#39;t use map?</title>
    <link href="https://www.nablepart.com/ebe9529b6e25/"/>
    <id>https://www.nablepart.com/ebe9529b6e25/</id>
    <published>2023-11-06T06:04:00.000Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
    <content type="html"><![CDATA[<p>Contents</p><ol><li>Introduction</li><li>map’s underlying structure</li><li>GET and PUT operations</li><li>DELETE operation</li><li>map expansion conditions</li><li>Considerations for using map</li><li>Postscript</li></ol><h2 id="1-Introduction"><a href="#1-Introduction" class="headerlink" title="1. Introduction"></a>1. Introduction</h2><p>As a gopher (a programmer who uses Go as their development language), in almost every technical interview at a big Internet company I get asked about the underlying implementation of some Go data structure or one of its distinctive mechanisms.</p><p>For example, when I interviewed with the “Tencent TEG” department, an elegant and easy-going interviewer sitting across the video call asked, “You usually use the Go language, right? So tell me, what reference types are there in Go?”</p><p>I thought, this one isn’t hard for me, so I answered unhurriedly: “<strong>The reference types in Go are slices (slice), pipelines (channel), functions (func), interfaces (interface), and dictionaries (map)</strong>”.</p><p>The interviewer added, “I see that you have not been working for long, so I won’t ask you about the underlying mechanism of channel, which may be a bit difficult! 
Instead, tell me about the underlying implementation of map, and what happens when a map adds or deletes data and when it expands.”</p><p>“Hey, what a guy: so he assumes I can’t answer a channel question. Who is he looking down on?” I couldn’t help thinking this interviewer was so …… For channels I do understand CSP, how to close a channel gracefully, and the like, but if he had asked about the source-code implementation, I really couldn’t have answered!</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/e2a5fc8e8ef24fb9a7f769d257c9baee%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>However, the implementation mechanism of map in Go is not very complicated, its design differs from that of other languages, and I had spent time reviewing it before the interview. It seems that <em>an attentive interviewer is not only looking for the most suitable candidate, but also for the one who knows them best</em>.</p><h2 id="2-map’s-underlying-data-structures"><a href="#2-map’s-underlying-data-structures" class="headerlink" title="2. map’s underlying data structures"></a>2. 
map’s underlying data structures</h2><p>So, I’ll describe the underlying implementation of a map in terms of its data structure, additions, deletions, and expansions.</p><p>First, we can find the underlying data structure code for map in the <code>runtime/map.go</code> package that comes with the Go language:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">type</span> hmap <span class="keyword">struct</span> &#123;</span><br><span class="line">     count     <span class="type">int</span>    </span><br><span class="line">     B         <span class="type">uint8</span>  </span><br><span class="line">     noverflow <span class="type">uint16</span></span><br><span class="line"></span><br><span class="line">     buckets    unsafe.Pointer</span><br><span class="line">     oldbuckets unsafe.Pointer  </span><br><span class="line"></span><br><span class="line">     extra *mapextra</span><br><span class="line">     ...</span><br><span class="line"></span><br><span class="line"> &#125;</span><br></pre></td></tr></table></figure><blockquote><p>Go maps are implemented as hmap structures. 
hmap records several attributes of a map, including the number of elements, the number of buckets [2^B], the addresses of the map’s buckets, and the addresses of the overflow buckets [we’ll talk about the concept of overflow buckets later, see below].</p></blockquote><p>Among them, the structure of the extra overflow bucket is as follows, and the parameters record the address of the overflow bucket and the pointers to the upper and lower buckets, respectively:</p><figure class="highlight scss"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">type mapextra struct &#123;</span><br><span class="line">    <span class="attribute">overflow</span>    *<span class="selector-attr">[]</span>*bmap</span><br><span class="line">    oldoverflow *<span class="selector-attr">[]</span>*bmap</span><br><span class="line">    nextOverflow *bmap</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>buckets records the address of the bmap that holds the map bucket (<strong>a bucket can be thought of as a bmap bucket</strong>). 
bmap is the key structure that records the actual data of the map; its structure as written in the source (before compilation) is as follows:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">type</span> bmap <span class="keyword">struct</span> &#123;</span><br><span class="line">    tophash [bucketCnt]<span class="type">uint8</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><blockquote><p>tophash records the first 8 bits of the hash of each key in the map data, used to quickly check whether a key exists in the map.</p></blockquote><p>After compilation, the structure of bmap effectively becomes:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line"><span class="keyword">type</span> bmap <span class="keyword">struct</span> &#123;</span><br><span class="line">    tophash [<span class="number">8</span>]<span class="type">uint8</span></span><br><span class="line">    data    [<span class="number">1</span>]<span class="type">byte</span>  </span><br><span class="line">    overflow <span class="type">uintptr</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>The <code>overflow</code> in bmap records the address of the next bmap; overflow is of type <code>uintptr</code> instead of *bmap in order to <strong>guarantee that the bmap does not contain any pointers at all, so as to minimize gc scanning</strong>.</p><p>But then the overflow buckets would have no live pointer references and could be garbage-collected while still in use, so hmap adds 
<code>extra</code> to store the pointer to the overflow bucket.</p><p>The <code>data</code> parameter stores the real map data (k-v key-value pairs), each bmap stores 8 k-v key-value pairs, and key-value pairs in the bmap are stored <strong>separately and consecutively</strong> as shown below:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/85f63ffeee214f089a5315dff0db167f%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><blockquote><p>When there are more than 8 k-v pairs, a new bmap is generated to store the data, and the overflow records the address of the new bmap.</p></blockquote><p><strong>summarize</strong></p><p>In the go implementation of map, the structure represented is hmap, and hmap maintains a number of bucket buckets (i.e., bmap buckets, which are subsequently called buckets for the sake of uniformity). For intuitive understanding, we refine the key fields in hmap to graphical patterns:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/5096e21c9e8d4c5e9ca92b2bd11d8393~tplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"> Each element of the <code>buckets</code> array is a bmap structure. Each <code>bmap</code> bucket holds 8 k-v pairs, and if there are more than 8, a new bmap will be generated, and the overflow will record the address of the new bmap.</p><h2 id="3-GET-and-PUT-operations"><a href="#3-GET-and-PUT-operations" class="headerlink" title="3. GET and PUT operations"></a>3. GET and PUT operations</h2><p>After understanding the data structure of a map, we will learn how to access the data in a map.</p><h2 id="3-1-GET-Getting-Data"><a href="#3-1-GET-Getting-Data" class="headerlink" title="3.1 GET Getting Data"></a>3.1 GET Getting Data</h2><p>Assuming that B&#x3D;4, i.e. 
the number of buckets is <code>2^B = 16</code>, suppose we want to get the value corresponding to key5 from the map.</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">m := <span class="built_in">make</span>(<span class="keyword">map</span>[<span class="type">string</span>]<span class="type">string</span>, <span class="number">0</span>)</span><br><span class="line">...</span><br><span class="line"></span><br><span class="line">fmt.Println(m[key5])</span><br></pre></td></tr></table></figure><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/6d4c84a8a39a4127989e8ca328c4b88c%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>As shown above, the GET process consists of the following steps:</p><ol><li>Compute the hash value of k5 (on a 64-bit operating system, the result has 64 bits);</li><li>Use the last B bits of the <strong>hash value</strong> to determine the bucket: 0100 in binary is 4 in decimal, so the key is in bucket 4;</li><li>Compare the first 8 bits of the hash value with the <code>tophash</code> entries to quickly determine the candidate position within the bucket;</li><li>Compare the queried key5 with the key stored in the <code>bmap</code>; if they match exactly, return the corresponding value5;</li><li>If key5 is not found in the current bucket, follow <code>overflow</code> to the next overflow bucket and repeat steps 3-4.</li></ol><h3 id="Repeat-steps-3-4-3-2-Storing-Data-in-the-PUT"><a href="#Repeat-steps-3-4-3-2-Storing-Data-in-the-PUT" class="headerlink" title="3.2 PUT: Storing Data"></a>3.2 PUT: Storing Data</h3><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/4dd5ce329ba140b99eb579a7cc3a8566%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>The map assignment can be divided into the following steps (assuming we add key6):</p><ul><li>Determine which bucket it falls in from the last B bits of key6’s hash value;</li><li>Iterate over the current bucket, comparing the first 8 bits of the key’s hash value with <code>tophash</code> to prevent duplicate keys, then find the storage location, i.e., insert the data at the first empty position of the <strong>bmap</strong>;</li><li><strong>If the current bucket is full, a new <code>bmap</code> bucket is created to hold the element; <code>overflow</code> records the address of the new bmap, and a reference to the new bucket is added to the <code>extra</code> pointer array in the hmap.</strong></li></ul><h3 id="3-3-About-hash-conflicts"><a href="#3-3-About-hash-conflicts" class="headerlink" title="3.3 About hash conflicts"></a>3.3 About hash conflicts</h3><p><strong>A hash conflict occurs when two different keys fall into the same bucket.</strong> When the bucket is not full, the new key is inserted into the first empty slot found scanning from front to back. When the bmap bucket already holds 8 k-v pairs, a new overflow bucket (bmap) is created; <code>overflow</code> records the address of the new bmap, and a reference to the new bucket is added to the hmap’s <code>extra</code> pointer array.</p><h2 id="4-The-DELETE-operation"><a href="#4-The-DELETE-operation" class="headerlink" title="4. The DELETE operation"></a>4. 
The DELETE operation</h2><p>When elements are deleted from a map:</p><ul><li>If the deleted element’s value is a value type, such as int, float, bool, string, or array, the map’s memory is not automatically freed;</li><li>If the value is a reference type, such as pointer, slice, map, or chan, part of the memory will be freed, but what is freed is the memory occupied by the referenced child elements; the memory occupied by the map itself is unaffected;</li><li>The map’s own memory is released only after the map is set to nil, and it is reclaimed in the next GC.</li></ul><p>So, <strong>a map does not release memory immediately after deletion</strong>; let’s verify this.</p><h3 id="4-1-When-map-is-a-basic-type"><a href="#4-1-When-map-is-a-basic-type" class="headerlink" title="4.1 When map is a basic type"></a>4.1 When the map value is a basic type</h3><figure class="highlight scss"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span
class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line"><span class="selector-tag">var</span> intMap map<span class="selector-attr">[int]</span>int</span><br><span class="line"><span class="selector-tag">var</span> cnt = <span class="number">8192</span></span><br><span class="line">​</span><br><span class="line"></span><br><span class="line">func <span class="built_in">printMemStats</span>() &#123;</span><br><span class="line">    <span class="selector-tag">var</span> m runtime<span class="selector-class">.MemStats</span></span><br><span class="line">    runtime<span class="selector-class">.ReadMemStats</span>(&amp;m)</span><br><span class="line">​</span><br><span class="line">    log<span class="selector-class">.Printf</span>(&quot;Alloc = %vKB, TotalAlloc = %vKB, Sys = %vKB, NumGC = %v\n&quot;,</span><br><span class="line">              m.Alloc/<span class="number">1024</span>, m.TotalAlloc/<span class="number">1024</span>, m.Sys/<span class="number">1024</span>, m.NumGC)</span><br><span class="line">&#125;</span><br><span class="line">​</span><br><span class="line">func <span class="built_in">initMap</span>() &#123;</span><br><span class="line">    intMap = <span class="built_in">make</span>(map[int]int, cnt)</span><br><span class="line">    for <span class="selector-tag">i</span> := <span class="number">0</span>; <span class="selector-tag">i</span> &lt; cnt; <span class="selector-tag">i</span>++ &#123;</span><br><span class="line">       intMap<span class="selector-attr">[i]</span> = <span class="selector-tag">i</span></span><br><span class="line">   
&#125;</span><br><span class="line">&#125;</span><br><span class="line">​</span><br><span class="line"></span><br><span class="line">func <span class="built_in">delMapKey</span>() &#123;</span><br><span class="line">    for <span class="selector-tag">i</span> := <span class="number">0</span>; <span class="selector-tag">i</span> &lt; cnt; <span class="selector-tag">i</span>++ &#123;</span><br><span class="line">       <span class="built_in">delete</span>(intMap, i)</span><br><span class="line">   &#125;</span><br><span class="line">&#125;</span><br><span class="line">​</span><br><span class="line">func <span class="selector-tag">main</span>() &#123;</span><br><span class="line">    <span class="built_in">printMemStats</span>()</span><br><span class="line">​</span><br><span class="line">    <span class="built_in">initMap</span>()</span><br><span class="line">    log<span class="selector-class">.Println</span>(&quot;after initMap, len(map) =&quot;, <span class="built_in">len</span>(intMap))</span><br><span class="line"></span><br><span class="line">    runtime<span class="selector-class">.GC</span>()</span><br><span class="line">    <span class="built_in">printMemStats</span>()</span><br><span class="line">​</span><br><span class="line">    <span class="built_in">delMapKey</span>()</span><br><span class="line">    log<span class="selector-class">.Println</span>(&quot;after delMapKey, len(map) =&quot;, <span class="built_in">len</span>(intMap))</span><br><span class="line">​</span><br><span class="line">    runtime<span class="selector-class">.GC</span>()</span><br><span class="line">    <span class="built_in">printMemStats</span>()</span><br><span class="line">​</span><br><span class="line">    intMap = nil</span><br><span class="line">    runtime<span class="selector-class">.GC</span>()</span><br><span class="line">    <span class="built_in">printMemStats</span>()</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>The final printout 
is:</p><blockquote><p>Alloc &#x3D; 108, TotalAlloc &#x3D; 108, Sys &#x3D; 6292, NumGC &#x3D; 0<br>after initMap, len(map) &#x3D; 8192<br>Alloc &#x3D; 410, TotalAlloc &#x3D; 424, Sys &#x3D; 6867, NumGC &#x3D; 1<br>after delMapKey, len(map) &#x3D; 0<br>Alloc &#x3D; 410, TotalAlloc &#x3D; 425, Sys &#x3D; 6931, NumGC &#x3D; 2<br>Alloc &#x3D; 99, TotalAlloc &#x3D; 427, Sys &#x3D; 6931, NumGC &#x3D; 3</p></blockquote><p>Here Alloc is the memory currently occupied by heap objects in KB, TotalAlloc is the cumulative memory allocated for heap objects, Sys is the memory obtained from the operating system, and NumGC is the number of completed garbage-collection cycles. It can be clearly seen that <strong>when a map’s values are of a basic type, the memory is not freed after keys are deleted, no matter how many GCs run</strong>.</p><h3 id="4-2-When-map-values-are-reference-types"><a href="#4-2-When-map-values-are-reference-types" class="headerlink" title="4.2 When map values are reference types"></a>4.2 When map values are reference types</h3><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">type</span> Person <span class="keyword">struct</span> &#123;</span><br><span class="line">    Name <span class="type">string</span></span><br><span class="line">    Age  <span class="type">int</span></span><br><span class="line">&#125;</span><br><span class="line"><span class="keyword">var</span> intMap <span class="keyword">map</span>[<span
class="type">int</span>]*Person</span><br><span class="line">​</span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">initMap</span><span class="params">()</span></span> &#123;</span><br><span class="line">    intMap = <span class="built_in">make</span>(<span class="keyword">map</span>[<span class="type">int</span>]*Person, cnt)</span><br><span class="line">    <span class="keyword">for</span> i := <span class="number">0</span>; i &lt; cnt; i++ &#123;</span><br><span class="line">       intMap[i] = &amp;Person&#123;</span><br><span class="line">          Name: <span class="string">&quot;zhangsan&quot;</span>,</span><br><span class="line">          Age:  <span class="number">20</span>,</span><br><span class="line">      &#125;</span><br><span class="line">   &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Change the map’s value type to a reference type, leave the rest of the code unchanged, and run it again:</p><blockquote><p>Alloc &#x3D; 108, TotalAlloc &#x3D; 108, Sys &#x3D; 6036, NumGC &#x3D; 0<br>after initMap, len(map) &#x3D; 8192<br>Alloc &#x3D; 601, TotalAlloc &#x3D; 615, Sys &#x3D; 6867, NumGC &#x3D; 1<br>after delMapKey, len(map) &#x3D; 0<br>Alloc &#x3D; 409, TotalAlloc &#x3D; 615, Sys &#x3D; 6931, NumGC &#x3D; 2<br>Alloc &#x3D; 99, TotalAlloc &#x3D; 618, Sys &#x3D; 6931, NumGC &#x3D; 3</p></blockquote><p>Comparing the two examples, we can see that when the map value is a basic type, deleting keys does not release space, while when the map value is a reference type, part of the space (the values’ space on the heap) is released; the memory occupied by the map itself still does not shrink. Why is this?</p><p>This is because, once the underlying bmap buckets have been allocated, a map only ever grows their number and never releases them. 
The delete operation of a map only sets the data in the bmap to nil (if the value is a pointer type, the pointed-to object is reclaimed in the next GC); the memory occupied by the bucket itself is unchanged, so the memory occupied by the map itself does not shrink because keys were deleted.</p><p>In particular, if a deleted key leaves an empty slot in a bmap bucket and a new key is added later, <strong>the empty slot in the bucket may simply be refilled</strong>, and the memory footprint of the map remains unchanged.</p><h3 id="4-3-How-to-Solve-Memory-Leakage-Caused-by-a-Map"><a href="#4-3-How-to-Solve-Memory-Leakage-Caused-by-a-Map" class="headerlink" title="4.3 How to Solve Memory Leakage Caused by a Map"></a>4.3 How to Solve Memory Leakage Caused by a Map</h3><p>When a map has keys added and deleted frequently, or has had too many keys added (triggering map expansion), then <strong>even if those keys are deleted, the memory is still not released, which amounts to a memory leak</strong>. We can solve the problem in the following ways:</p><ul><li>Set a map that is no longer in use to nil, or restart the service periodically so the map is reallocated;</li><li>When a map’s values consume too much memory, change the values to pointers;</li><li>Periodically copy the map’s elements into a new map.</li></ul><p>Generally speaking, most high-traffic Internet businesses are ToC scenarios with a very high release frequency. Some services may go online several times a day, restarting and recovering before the problem is ever exposed, so it is not a big deal 🐶.</p><h2 id="5-map-expansion-conditions"><a href="#5-map-expansion-conditions" class="headerlink" title="5. map expansion conditions"></a>5. 
map expansion conditions</h2><h3 id="5-1-Expansion-of-the-same-capacity"><a href="#5-1-Expansion-of-the-same-capacity" class="headerlink" title="5.1 Expansion of the same capacity"></a>5.1 Expansion of the same capacity</h3><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/7e62339e501c41f19cee9914e23038e3%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>Since a map is constantly having elements put and deleted, its buckets can accumulate many scattered empty slots, which leaves bmap with a long tail of overflow buckets and makes scans take longer. So this kind of expansion is really a compaction, moving data from the rear slots toward the front. <strong>With same-capacity expansion, the elements are rearranged, but no buckets are swapped and the number of buckets does not increase.</strong></p><h3 id="5-2-Two-volume-expansion"><a href="#5-2-Two-volume-expansion" class="headerlink" title="5.2 Two-volume expansion"></a>5.2 Double-capacity expansion</h3><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/26c90c2ccd704c6ebaafd3fe153011da%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>When the bucket array is no longer sufficient, an expansion occurs: <strong>elements are rearranged, the number of bmap buckets increases, and the keys inside a bucket may be migrated to different buckets</strong>.</p><p>As shown in the figure, B &#x3D; 2 before expansion and B &#x3D; 3 after. Suppose an element whose key hashes to a value ending in 101: as introduced above, before expansion the last two bits of the hash determine the bucket number. 
That is, <strong>before expansion the last two bits are 01, so the element is in bucket 1; after expansion the last three bits are 101, so the element must be migrated to bucket 5</strong>.</p><h3 id="5-3-Conditions-for-expansion-to-occur"><a href="#5-3-Conditions-for-expansion-to-occur" class="headerlink" title="5.3 Conditions for expansion to occur"></a>5.3 Conditions for expansion to occur</h3><h4 id="1-Expansion-condition-1-loading-factor-6-5"><a href="#1-Expansion-condition-1-loading-factor-6-5" class="headerlink" title="1) Expansion condition 1: loading factor &gt; 6.5"></a>1) Expansion condition 1: loading factor &gt; 6.5</h4><p>Under normal circumstances, if there are no overflow buckets, a bucket holds at most 8 elements. When the average number of elements per bucket exceeds 6.5, the current buckets are almost full and need to be expanded.</p><blockquote><p>loadFactor &#x3D; number of elements in the map &#x2F; number of current buckets in the map, i.e. 
loadFactor &#x3D; count &#x2F; (2^B)<br>From the formula, <strong>loadFactor is the average number of elements in each bucket of the current buckets</strong>.</p></blockquote><h4 id="2-Expansion-Condition-2-Excessive-number-of-overflow-buckets"><a href="#2-Expansion-Condition-2-Excessive-number-of-overflow-buckets" class="headerlink" title="2) Expansion Condition 2: Excessive number of overflow buckets"></a>2) Expansion Condition 2: Excessive number of overflow buckets</h4><ul><li><p>When B &lt; 15, expansion starts if the number of overflow bmaps reaches 2^B;</p></li><li><p>When B &gt;&#x3D; 15, expansion starts if the number of overflow bmaps exceeds 2^15.</p></li></ul><p>Too many overflow buckets arise when a large number of keys in the map share the same low-order B bits of their hash, so a few bmap buckets keep receiving new data and their overflow chains grow longer and longer.</p><p>As a result, every add, delete, update, and lookup on the map scans more and more slowly. Expanding the capacity <strong>rearranges the elements of these overflow buckets so that they are distributed more evenly across buckets</strong>, improving scan efficiency.</p><h4 id="3-Details-when-expanding"><a href="#3-Details-when-expanding" class="headerlink" title="3) Details when expanding"></a>3) Details when expanding</h4><ol><li>The hmap structure has an oldbuckets field. When an expansion occurs, the old data is first kept there while a new buckets array is allocated; at this point the capacity of buckets is twice that of oldbuckets;</li><li>map expansion is incremental, not done all at once: every delete or modification of the map triggers a step of migrating data from oldbuckets to buckets. 
This is because a full migration of a large number of keys at once would consume a lot of resources and stall the program;</li><li>Before the migration is complete, each get or put traverses oldbuckets before traversing buckets.</li></ol><h2 id="6-Notes-on-map"><a href="#6-Notes-on-map" class="headerlink" title="6. Notes on map"></a>6. Notes on map</h2><h4 id="1-Do-not-address-elements"><a href="#1-Do-not-address-elements" class="headerlink" title="1) Do not address elements"></a>1) Do not address elements</h4><p><strong>As the elements of a map grow, the map’s underlying storage may be reallocated, invalidating addresses taken earlier.</strong> Let’s look at an example:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">type</span> Student <span class="keyword">struct</span> &#123;</span><br><span class="line">    Name <span class="type">string</span></span><br><span class="line">    Age  <span class="type">int</span></span><br><span class="line">&#125;</span><br><span class="line">​</span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">f1</span><span class="params">()</span></span> &#123;</span><br><span class="line">    m := <span class="keyword">map</span>[<span class="type">int</span>]Student&#123;</span><br><span class="line">        <span class="number">1</span>: Student&#123;Age: <span class="number">15</span>, Name: <span class="string">&quot;jack&quot;</span>&#125;,</span><br><span class="line">        <span 
class="number">2</span>: Student&#123;Age: <span class="number">16</span>, Name: <span class="string">&quot;danny&quot;</span>&#125;,</span><br><span class="line">        <span class="number">3</span>: Student&#123;Age: <span class="number">17</span>, Name: <span class="string">&quot;andy&quot;</span>&#125;,</span><br><span class="line">    &#125;</span><br><span class="line">    m[<span class="number">1</span>].Name = <span class="string">&quot;JACK&quot;</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>A compilation error occurs here because map elements cannot be addressed. That is, you can read m[1], but you cannot assign to a field of the value it returns (you could only overwrite the whole value). If you want to modify the value in place, use a pointer as the value type, as follows:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">f2</span><span class="params">()</span></span> &#123;</span><br><span class="line">    m := <span class="keyword">map</span>[<span class="type">int</span>]*Student&#123;</span><br><span class="line">        <span class="number">1</span>: &amp;Student&#123;Age: <span class="number">15</span>, Name: <span class="string">&quot;jack&quot;</span>&#125;,</span><br><span class="line">        <span class="number">2</span>: &amp;Student&#123;Age: <span class="number">16</span>, Name: <span class="string">&quot;danny&quot;</span>&#125;,</span><br><span class="line">        <span class="number">3</span>: &amp;Student&#123;Age: <span class="number">17</span>, Name: <span class="string">&quot;andy&quot;</span>&#125;,</span><br><span class="line">    &#125;</span><br><span class="line">    m[<span class="number">1</span>].Name = <span class="string">&quot;JACK&quot;</span></span><br><span 
class="line">&#125;</span><br></pre></td></tr></table></figure><h4 id="2-Thread-unsafe"><a href="#2-Thread-unsafe" class="headerlink" title="2) Thread unsafe"></a>2) Thread unsafe</h4><p>Suppose the number of barrels of a map is 4, i.e., B&#x3D;2, and the number of current elements is also 4, i.e., the barrel is full. At this time, two goroutines (g1 and g2) read and write to this map, g1 inserts key1 and g2 reads key2, the following may happen:</p><ol><li><p>g2 calculates the hash value of key2 [1101….. .101], B&#x3D;2, and determines the bucket number to be 1;</p></li><li><p>g1 adds key1, which triggers the expansion condition, increasing B to 3 and expanding the number of buckets to 8. 3;</p></li><li><p>The key in the map starts to migrate, assuming that the migration is completed soon, and key2 is migrated from bucket 1 to bucket 5. 4. g2 is migrated from bucket 1 to bucket 5;</p></li><li><p>g2 traverses from bucket 1 and fails to get data!</p></li></ol><p>So, when manipulating a map, you can <strong>use Go’s own sync.RWMutex locks, or use sync.Map (which supports concurrent and shared locks) to ensure thread safety</strong>.</p><h2 id="7-Postscript"><a href="#7-Postscript" class="headerlink" title="7. Postscript"></a>7. 
Postscript</h2><p>After I answered these questions, the interviewer’s face remained unperturbed, and he said lightly, “Why don’t we move on and talk about channels?”</p><p>So I had poured out every point I had learned, and the interviewer seemed to acknowledge my level; he didn’t press me any harder and started the next topic.</p><p>After the interview, I couldn’t help thinking: “As expected of the Penguin factory (Tencent), the candidates really are strong!”</p><p>Nowadays, with the Internet job market so cold, if we want to get through the technical interviews at the big Internet companies, we must not only be able to apply the technology we have learned broadly, but also spend real effort on understanding the underlying mechanisms.</p><p><img src="https://p6-juejin.byteimg.com/tos-cn-i-k3u1fbpfcp/daf762c3893a4e0196b21f49bb6cf6e3~tplv-k3u1fbpfcp-zoom-in-crop-mark:1512:0:0:0.awebp"></p><p>After all, anyone can memorize stock “eight-legged essay” interview answers, and anyone can grind algorithm questions, so interviewers have to raise the difficulty to test a candidate’s breadth and depth of knowledge! The gap between you and others may be exactly the distance of one map.</p>]]></content>
    
    
    <summary type="html">Whether it is the usual development, or in the Go language technical interviews, map is very difficult to get around the topic. So, do you understand the underlying implementation mechanism of map?</summary>
    
    
    
    <category term="Backend" scheme="https://www.nablepart.com/categories/Backend/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="framework" scheme="https://www.nablepart.com/tags/framework/"/>
    
    <category term="Backend Technology Sharing" scheme="https://www.nablepart.com/tags/Backend-Technology-Sharing/"/>
    
    <category term="network" scheme="https://www.nablepart.com/tags/network/"/>
    
    <category term="Go" scheme="https://www.nablepart.com/tags/Go/"/>
    
    <category term="data structure" scheme="https://www.nablepart.com/tags/data-structure/"/>
    
    <category term="language" scheme="https://www.nablepart.com/tags/language/"/>
    
    <category term="understand" scheme="https://www.nablepart.com/tags/understand/"/>
    
  </entry>
  
  <entry>
    <title>gRPC Response to ChatGPT Streaming Q&amp;A</title>
    <link href="https://www.nablepart.com/01d6bf80beac/"/>
    <id>https://www.nablepart.com/01d6bf80beac/</id>
    <published>2023-11-06T05:04:00.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
<content type="html"><![CDATA[<p>Contents</p><ol><li>A first look at RPC</li><li>RPC vs. HTTP</li><li>Popular RPC frameworks</li><li>Protobuf and gRPC</li><li>gRPC response to ChatGPT Q&amp;A</li><li>Summary</li></ol><h2 id="1-初始RPC"><a href="#1-初始RPC" class="headerlink" title="1. 初始RPC"></a>1. A first look at RPC</h2><p><strong>What is RPC?</strong></p><blockquote><p>RPC (Remote Procedure Call) is a computer communication protocol. The protocol allows a program running on one computer to call a subroutine in another address space (usually on another computer on a shared network), and the programmer calls it as if it were a local program, without having to program additionally for this interaction (no need to pay attention to the details). –Wikipedia</p></blockquote><p>In layman’s terms, suppose there are two servers A and B, and two programs (program 1 and program 2) are deployed on these two servers. Since they are two machines, their IP addresses, memory space, etc. are definitely not shared, so how does program 1 call the methods of program 2?</p><p>At this point we need to agree on a protocol that allows applications on the two machines to communicate. RPC is such a protocol, and it lets the two programs recognize each other through the following steps:</p><ol><li>The two machines need to send and receive data, so one acts as the server and one as the client, and they need to <strong>establish a TCP connection</strong> (per call on demand; it can be a short-lived or a long-lived connection);</li><li>Before establishing the TCP connection, the client needs to know <strong>the IP address and port number</strong> of the server, where the IP address uniquely identifies the host on the network and the port number uniquely identifies the application (i.e., process) on that host;</li><li>Before communicating, the server runs the application and listens on the corresponding port;</li><li>The client initiates the RPC remote procedure call, passing the parameters of the program interaction to the server, which then transmits 
them back to the client after processing the received data, then disconnects the TCP connection and ends the call.</li></ol><p>The whole process is shown in the following figure:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/e9a2f9a520cd41db834ce8167dabfe02%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><ul><li><p>Client stub: stores the address of the server being communicated with and <strong>packages the client’s request into a message body that can be transmitted over the network</strong>;</p></li><li><p>Server stub: receives messages sent by the client and packages the return results into message bodies that can be transmitted across the network;</p></li><li><p>Sockets (network sockets): a set of program interfaces that applications can use to exchange data between different hosts on a network.</p></li></ul><h2 id="2-RPC-vs-HTTP"><a href="#2-RPC-vs-HTTP" class="headerlink" title="2. RPC vs. HTTP"></a>2. RPC vs. HTTP</h2><p><strong>HTTP or RPC: how to choose?</strong></p><p>After learning about RPC, some people may still wonder: since they are both communication protocols, should we choose HTTP (HyperText Transfer Protocol) or the RPC protocol for program interaction and application development?</p><p>This starts with the attributes of both. First, the transfer protocol:</p><ul><li>RPC is a communication protocol based on the TCP transport layer or the HTTP2 application layer;</li><li>HTTP is based on HTTP protocols only, including HTTP1.x (i.e. 
HTTP1.0, 1.1) and HTTP2; many browsers currently use 1.x by default to access server data.</li></ul><p>Performance overhead (comparing data encodings):</p><ul><li>RPC can use a framework such as gRPC to achieve efficient binary transmission;</li><li>HTTP payloads are mostly JSON, whose byte size and serialization cost more performance than gRPC’s binary encoding.</li></ul><p>On load balancing:</p><ul><li>RPC frameworks usually come with a built-in load-balancing strategy;</li><li>HTTP needs Nginx or HAProxy to be configured for it.</li></ul><p>Transfer efficiency:</p><ul><li>RPC can use a custom TCP protocol to keep request messages small, or use the HTTP2 protocol, which also reduces message size well and improves transfer efficiency;</li><li>HTTP based on the HTTP1.x protocol carries a lot of useless content in each request; based on HTTP2.0, a simple encapsulation can serve as RPC, though a standard RPC framework still has the advantage in service governance.</li></ul><p>In summary, we can easily see that <strong>RPC is stronger than HTTP in performance overhead, transfer efficiency, load balancing, and other aspects</strong>. At this point, careful readers may wonder: then why do our everyday systems and websites use the HTTP protocol instead of switching to RPC?</p><p>To give a common analogy, HTTP is like Mandarin, and RPC is like a local dialect, such as Cantonese or the southwestern dialects of Yunnan, Guizhou, and Sichuan.</p><p>The advantage of speaking Mandarin is that everyone understands it and most people speak it, so <strong>HTTP has a certain universality</strong>. The advantage of a dialect is that it can be more concise, more private, and more customizable; the disadvantage is that the other party who “speaks” the dialect (especially the client side) must also understand it, and once everyone speaks one dialect, it is hard to switch to another. 
So <strong>RPC is generally used for internal service calls</strong>, such as between service A and service B in the Ali Taobao system.</p><h2 id="3-Popular-RPC-frameworks"><a href="#3-Popular-RPC-frameworks" class="headerlink" title="3. Popular RPC frameworks"></a>3. Popular RPC frameworks</h2><blockquote><p>There are many popular RPC frameworks; here are three common ones.</p></blockquote><ol><li><code>gRPC</code>: gRPC is an open source project announced by Google in 2015, based on the HTTP2.0 protocol, and supports many common programming languages. The HTTP 2.0 protocol is an upgraded, binary-based version of the HTTP protocol that supports features such as multiplexed concurrent data transfers.</li><li><code>Thrift</code>: Thrift is a cross-language RPC framework for internal systems developed by Facebook, which was contributed to the Apache Foundation in 2007 and has become one of Apache’s many open source projects.</li><li><code>Dubbo</code>: Dubbo is an RPC framework open-sourced by Alibaba in 2011. It is widely used in many Internet companies and enterprise applications and provides a series of protocols and pluggable serialization frameworks, but it only supports the Java language.</li></ol><p>Benchmarks comparing these RPC frameworks show that, in terms of <strong>throughput, response time, and stability</strong>, gRPC has the best overall performance, and it is also the RPC framework used by many domestic companies. 
Moreover, gRPC has an official Go implementation; with the popularity of microservices and cloud computing, the number of companies and projects using Go keeps growing, so gRPC has also become a natural choice for internal system communication in Go.</p><p>gRPC is developed on top of the <strong>ProtoBuf (Protocol Buffers) serialization protocol</strong>. Its principle is to define the service interface parameters and return value types in an IDL (Interface Definition Language) file, and then use a code-generation tool to generate server and client code templates. In this way, we only need to write an IDL file and the business interaction code, and we can use gRPC to communicate.</p><p>A diagram shows the difference between gRPC and HTTP:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/83fe824f83ee4fdea8e7c3446728510f%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><h2 id="4-Protobuf-and-gRPC"><a href="#4-Protobuf-and-gRPC" class="headerlink" title="4. Protobuf and gRPC"></a>4. 
Protobuf and gRPC</h2><h3 id="4-1-Introduction-to-Protobuf"><a href="#4-1-Introduction-to-Protobuf" class="headerlink" title="4.1 Introduction to Protobuf"></a>4.1 Introduction to Protobuf</h3><p>The Protocol Buffers protocol (protobuf for short, same below), like JSON and XML, is a data serialization format (serialization means converting in-memory data into a binary form so it can be transmitted over the network or stored).</p><ul><li>protobuf is a cross-language, cross-platform serialization protocol;</li><li>protobuf is not limited to gRPC; it can also be used for data transfer and storage in other scenarios.</li></ul><p>Unlike JSON and XML, protobuf requires an IDL file to be defined before use, which has the advantage of smaller data size and faster transfer.</p><p>gRPC uses protobuf for its transport encoding, so we first need to learn the rules for writing protobuf files.</p><h3 id="4-2-Protobuf-Defining-Data-Structures"><a href="#4-2-Protobuf-Defining-Data-Structures" class="headerlink" title="4.2 Protobuf Defining Data Structures"></a>4.2 Protobuf Defining Data Structures</h3><p>Similar to yaml and xml files, protobuf files must be written in a specific format. The following is a general way to write a protobuf file [gpt.proto]:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/14e2b35a72394ec8b6e25d5cb085a08c%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>In addition to the description in the image above, protobuf files also have some key constructs, such as message, which is the most basic type in the protobuf protocol, equivalent to a class object in Java or a struct in Go. 
As you can see in the figure above, each message has one or more fields and field types, which are equivalent to the parameters and parameter types of an object.</p><p>Once we’ve written the protobuf file, we can start writing the communication logic for gRPC.</p><h3 id="4-3-gRPC-implementation"><a href="#4-3-gRPC-implementation" class="headerlink" title="4.3 gRPC implementation"></a>4.3 gRPC implementation</h3><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/b46af0e783084e20ab6934731e9c14fa%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>As we said above, gRPC is a framework for cross-language communication, so the server and client can be written in different languages. Next, let’s demonstrate the process of implementing gRPC communication in the Go language.</p><p>Steps:</p><ol><li>write the protobuf file</li><li>generate Go code</li><li>write the server side, which listens on a port</li><li>write the client side, which requests data</li></ol><p>First, let’s create a new project. The directory structure is as follows [the wecom project is used for GPT interaction; you can follow along, and the important folders and file names are circled in red]:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/0a377672b75149d6a51620a4d6d2b97a%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><h4 id="1-Writing-the-protobuf-file"><a href="#1-Writing-the-protobuf-file" class="headerlink" title="1) Writing the protobuf file"></a>1) Writing the protobuf file</h4><p>According to the protobuf rules above, we first write the protobuf file used in this project [protos&#x2F;gpt&#x2F;gpt.proto].</p><figure class="highlight ini"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span 
class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br></pre></td><td class="code"><pre><span class="line"><span class="attr">syntax</span> = <span class="string">&quot;proto3&quot;</span>;</span><br><span class="line">​</span><br><span class="line">option <span class="attr">go_package</span> = <span class="string">&quot;./;gpt&quot;</span>;</span><br><span class="line">​</span><br><span class="line">package gpt;</span><br><span class="line">​</span><br><span class="line">service Greeter &#123;</span><br><span class="line">  rpc GetGPTMessage (GPTRequest) returns (GPTReply) &#123;&#125;</span><br><span class="line">&#125;</span><br><span class="line">​</span><br><span class="line">message GPTRequest &#123;</span><br><span class="line">  string <span class="attr">content</span> = <span class="number">1</span>;</span><br><span class="line">&#125;</span><br><span class="line">​</span><br><span class="line">message GPTReply &#123;</span><br><span class="line">  string <span class="attr">message</span> = <span class="number">1</span>;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><h4 id="2-Generate-Go-code"><a href="#2-Generate-Go-code" class="headerlink" title="2) Generate Go code"></a>2) Generate Go code</h4><p>We need the protoc tool to generate Go code. First download the toolkit [install the protoc package that matches your operating system]:</p><ul><li><a href="https://link.juejin.cn/?target=https://github.com/protocolbuffers/ProtoBuf/releases/download/v3.20.2/protoc-3.20.2-win64.zip">Windows 64-bit: download here</a></li><li><a href="https://github.com/protocolbuffers/ProtoBuf/releases/download/v3.20.2/protoc-3.20.2-osx-x86_64.zip">Mac Intel 64-bit: download here</a></li><li><a 
href="https://github.com/protocolbuffers/ProtoBuf/releases/download/v3.20.2/protoc-3.20.2-osx-aarch_64.zip">Mac ARM 64-bit: download here</a></li><li><a href="https://github.com/protocolbuffers/ProtoBuf/releases/download/v3.20.2/protoc-3.20.2-linux-x86_64.zip">Linux 64-bit: download here</a></li></ul><p>Then install the golang plugins [generating plugins for other languages such as Java and Python is different; see the official gRPC documentation for details: <a href="https://doc.oschina.net/grpc?t=58008">https://doc.oschina.net/grpc?t=58008</a>]. The second plugin is required by the --go-grpc_out option used below:</p><blockquote><p>go install google.golang.org&#x2F;protobuf&#x2F;cmd&#x2F;protoc-gen-go@latest<br>go install google.golang.org&#x2F;grpc&#x2F;cmd&#x2F;protoc-gen-go-grpc@latest</p></blockquote><p>This completes the installation of our toolkit.</p><p>After the installation is complete, go to the directory where the proto files are located:</p><blockquote><p>cd protos&#x2F;gpt&#x2F;</p></blockquote><p>Generate the Go code [if protoc is not found, the protoc toolkit is not installed or there is a problem with the environment variable settings].</p><blockquote><p>protoc --go_out&#x3D;. --go_opt&#x3D;paths&#x3D;source_relative --go-grpc_out&#x3D;. 
--go-grpc_opt&#x3D;paths&#x3D;source_relative gpt.proto</p></blockquote><p>At this point, there are 3 files in the current directory [protos&#x2F;gpt]:</p><blockquote><p>gpt.pb.go<br>gpt.proto<br>gpt_grpc.pb.go</p></blockquote><h4 id="3-Add-dependencies"><a href="#3-Add-dependencies" class="headerlink" title="3) Add dependencies"></a>3) Add dependencies</h4><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/85233e41d70247159ee0d4c5c509fb2d%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>If, like me, you see code marked in red after generating it, you can fetch the dependencies in the project root [&#x2F;wecom]:</p><blockquote><p>go mod tidy</p></blockquote><p>If red marks remain after go mod tidy, the Go version declared in go.mod may be too low; update the go directive in go.mod:</p><blockquote><p>go 1.18</p></blockquote><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/4f538c58aa5e43f4bc04f5da71e4366a%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><h4 id="4-Write-business-code-to-implement-the-server-side"><a href="#4-Write-business-code-to-implement-the-server-side" class="headerlink" title="4) Write business code to implement the server side"></a>4) Write business code to implement the server side</h4><p>First, add the grpc package to your project:</p><blockquote><p>go get google.golang.org&#x2F;grpc</p></blockquote><p>Then write the server-side business logic [gpt_server&#x2F;main.go]:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span 
class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">package</span> main</span><br><span class="line">​</span><br><span class="line"><span class="keyword">import</span> (</span><br><span class="line">    <span class="string">&quot;context&quot;</span></span><br><span class="line">    <span class="string">&quot;flag&quot;</span></span><br><span class="line">    <span class="string">&quot;fmt&quot;</span></span><br><span class="line">    <span class="string">&quot;google.golang.org/grpc&quot;</span></span><br><span class="line">    <span class="string">&quot;log&quot;</span></span><br><span class="line">    <span class="string">&quot;net&quot;</span></span><br><span class="line">    pb <span class="string">&quot;wecom/protos/gpt&quot;</span></span><br><span class="line">)</span><br><span class="line">​</span><br><span class="line"><span class="keyword">var</span> (</span><br><span class="line">    port = flag.Int(<span class="string">&quot;port&quot;</span>, <span class="number">50051</span>, <span class="string">&quot;port&quot;</span>)</span><br><span class="line">)</span><br><span class="line">​</span><br><span class="line"><span 
class="keyword">type</span> server <span class="keyword">struct</span>&#123;</span><br><span class="line">    pb.UnimplementedGreeterServer</span><br><span class="line">&#125;</span><br><span class="line">​</span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="params">(s *server)</span></span> GetGPTMessage(ctx context.Context, in *pb.GPTRequest) (*pb.GPTReply, <span class="type">error</span>) &#123;</span><br><span class="line">    <span class="keyword">return</span> &amp;pb.GPTReply&#123;Message: <span class="string">&quot;gpt response&quot;</span>&#125;, <span class="literal">nil</span></span><br><span class="line">&#125;</span><br><span class="line">​</span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">main</span><span class="params">()</span></span> &#123;</span><br><span class="line">    flag.Parse()</span><br><span class="line">    list, err := net.Listen(<span class="string">&quot;tcp&quot;</span>, fmt.Sprintf(<span class="string">&quot;:%d&quot;</span>, *port))</span><br><span class="line">    <span class="keyword">if</span> err != <span class="literal">nil</span> &#123;</span><br><span class="line">        log.Fatalf(<span class="string">&quot;listen failed, %v&quot;</span>, err)</span><br><span class="line">    &#125;</span><br><span class="line">    s := grpc.NewServer()</span><br><span class="line"></span><br><span class="line">    pb.RegisterGreeterServer(s, &amp;server&#123;&#125;)</span><br><span class="line">    log.Printf(<span class="string">&quot;listen success, %v&quot;</span>, list.Addr())</span><br><span class="line">    <span class="keyword">if</span> err := s.Serve(list); err != <span class="literal">nil</span> &#123;</span><br><span class="line">        log.Fatalf(<span class="string">&quot;server failed, %v&quot;</span>, err)</span><br><span class="line">    &#125;</span><br><span 
class="line">&#125;</span><br></pre></td></tr></table></figure><p>Run the main function:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/069a32e234904313bd4211b84f4618c7%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>Next, we implement another client binding to a port to request messages from the server.</p><h4 id="5-Client-side-Logic"><a href="#5-Client-side-Logic" class="headerlink" title="5) Client-side Logic"></a>5) Client-side Logic</h4><p>From the server-side implementation above, the gRPC implementation is very simple, just follow the template generated by protobuf to fill in the business code! This process, we only need to focus on the server-side and client-side connection communication, and their connection is not very deep, and our HTTP listening and binding is the same principle.</p><p>Client-side business code [gpt_client&#x2F;main.go</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span 
class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">package</span> main</span><br><span class="line">​</span><br><span class="line"><span class="keyword">import</span> (</span><br><span class="line">   <span class="string">&quot;context&quot;</span></span><br><span class="line">   pb <span class="string">&quot;dm-lite/resource/proto/gpt&quot;</span></span><br><span class="line">   <span class="string">&quot;flag&quot;</span></span><br><span class="line">   <span class="string">&quot;google.golang.org/grpc&quot;</span></span><br><span class="line">   <span class="string">&quot;google.golang.org/grpc/credentials/insecure&quot;</span></span><br><span class="line">   <span class="string">&quot;log&quot;</span></span><br><span class="line">)</span><br><span class="line">​</span><br><span class="line"><span class="keyword">const</span> defaultName = <span class="string">&quot;world&quot;</span></span><br><span class="line">​</span><br><span class="line"><span class="keyword">var</span> (</span><br><span class="line">   addr = flag.String(<span class="string">&quot;addr&quot;</span>, <span class="string">&quot;localhost:50051&quot;</span>, <span class="string">&quot;&quot;</span>)</span><br><span class="line">   name = flag.String(<span class="string">&quot;name&quot;</span>, defaultName, <span class="string">&quot;&quot;</span>)</span><br><span class="line">)</span><br><span class="line">​</span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">main</span><span class="params">()</span></span> &#123;</span><br><span class="line">   flag.Parse()</span><br><span class="line">   conn, err := grpc.Dial(*addr, grpc.WithTransportCredentials(insecure.NewCredentials()))</span><br><span class="line">   <span class="keyword">if</span> err != <span class="literal">nil</span> 
&#123;</span><br><span class="line">      log.Fatalf(<span class="string">&quot;Dial failed, %v&quot;</span>, err)</span><br><span class="line">   &#125;</span><br><span class="line">   <span class="keyword">defer</span> conn.Close()</span><br><span class="line">   c := pb.NewGreeterClient(conn)</span><br><span class="line">   ctx := context.Background()</span><br><span class="line">   r, err := c.GetGPTMessage(ctx, &amp;pb.GPTRequest&#123;</span><br><span class="line">      Content: <span class="string">&quot;hello&quot;</span>,</span><br><span class="line">   &#125;)</span><br><span class="line">   <span class="keyword">if</span> err != <span class="literal">nil</span> &#123;</span><br><span class="line">      log.Fatalf(<span class="string">&quot;GetGPTMessage failed, %v&quot;</span>, err)</span><br><span class="line">   &#125;</span><br><span class="line">​</span><br><span class="line">   log.Printf(<span class="string">&quot;get reply: %v&quot;</span>, r.GetMessage())</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>When the server is listening on port 50051, we can run the client to call the <code>GetGPTMessage</code> method of gRPC. Run the client main function to get the result:<img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/1a7878cf680c40d9ae64a8da71edd358%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>Nice, the gRPC interface call was successful!</p><h2 id="5-gRPC-response-to-ChatGPT-Q-A"><a href="#5-gRPC-response-to-ChatGPT-Q-A" class="headerlink" title="5. gRPC response to ChatGPT Q&amp;A"></a>5. gRPC response to ChatGPT Q&amp;A</h2><p><strong>Streaming RPC</strong></p><p>The above implements the gRPC interface for real-time response, i.e., a simple pattern of one question and one answer. 
If compared to a scenario in an interview, the simple pattern looks like this:</p><blockquote><p>(Interviewer) Q: Do you know gRPC?<br>(Candidate) A: Yes, gRPC is an RPC framework initiated by Google;<br>(Interviewer) Q: What else?<br>(Candidate) A: gRPC is based on HTTP&#x2F;2 protocol transport;<br>(Interviewer) Q: What else?<br>(Candidate) A: It uses Protocol Buffers as the interface description language;<br>(Interviewer) Q: Can we finish this at once?<br>(Candidate) A: ……</p></blockquote><p>The interviewer wants everything at once, and we can hardly refuse, so streaming RPC comes into play:</p><blockquote><p>(Interviewer) Q: Do you know gRPC?<br>(Candidate) A: Yes, gRPC is an RPC framework initiated by Google… It is based on HTTP&#x2F;2 protocol transport… And it uses Protocol Buffers as the interface description language.<br>(Interviewer) Thinks to himself: not bad! This candidate gives the whole answer in one go instead of squeezing it out like toothpaste!</p></blockquote><p>Next, we add a server-side streaming RPC interface to the proto file:</p><figure class="highlight scss"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">rpc GetGPTStreamData (GPTRequest) returns (stream GPTReply) &#123;&#125;</span><br></pre></td></tr></table></figure><h3 id="5-1-添加流式接口"><a href="#5-1-添加流式接口" class="headerlink" title="5.1 Adding a Streaming Interface"></a>5.1 Adding a Streaming Interface</h3><p>The improved protobuf file [gpt&#x2F;gpt.proto]</p><figure class="highlight ini"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span 
class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br></pre></td><td class="code"><pre><span class="line"><span class="attr">syntax</span> = <span class="string">&quot;proto3&quot;</span>;</span><br><span class="line">​</span><br><span class="line">option <span class="attr">go_package</span> = <span class="string">&quot;./;gpt&quot;</span>;</span><br><span class="line">​</span><br><span class="line">package gpt;</span><br><span class="line">​</span><br><span class="line">service Greeter &#123;</span><br><span class="line">  rpc GetGPTMessage (GPTRequest) returns (GPTReply) &#123;&#125;</span><br><span class="line">  rpc GetGPTStreamData (GPTRequest) returns (stream GPTReply) &#123;&#125;</span><br><span class="line">&#125;</span><br><span class="line">​</span><br><span class="line">message GPTRequest &#123;</span><br><span class="line">  string <span class="attr">content</span> = <span class="number">1</span>;</span><br><span class="line">&#125;</span><br><span class="line">​</span><br><span class="line">message GPTReply &#123;</span><br><span class="line">  string <span class="attr">message</span> = <span class="number">1</span>;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Following the three-step strategy for gRPC development, we’ll start by generating template Go code from the proto file:</p><blockquote><p>protoc --go_out&#x3D;. --go_opt&#x3D;paths&#x3D;source_relative --go-grpc_out&#x3D;. 
--go-grpc_opt&#x3D;paths&#x3D;source_relative gpt.proto</p></blockquote><h3 id="5-2-Server-side"><a href="#5-2-Server-side" class="headerlink" title="5.2 Server side"></a>5.2 Server side</h3><p>Add the streaming server-side logic in [gpt_server&#x2F;main.go]; note that the following code is a new addition, not an override (the <code>time</code> package also needs to be added to the server’s imports):</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">func</span> <span class="params">(s *server)</span></span> GetGPTStreamData(in *pb.GPTRequest, gptStream pb.Greeter_GetGPTStreamDataServer) <span class="type">error</span> &#123;</span><br><span class="line">   log.Printf(<span class="string">&quot;GetGPTStreamData Request: %v&quot;</span>, in.GetContent())</span><br><span class="line">   messages := []<span class="type">string</span>&#123;</span><br><span class="line">      <span class="string">&quot;春眠不觉晓&quot;</span>,</span><br><span class="line">      <span class="string">&quot;处处闻啼鸟&quot;</span>,</span><br><span class="line">      <span class="string">&quot;夜来风雨声&quot;</span>,</span><br><span class="line">      <span class="string">&quot;花落知多少&quot;</span>,</span><br><span class="line">   &#125;</span><br><span class="line">​</span><br><span class="line">   
log.Println(<span class="string">&quot;Send reply:&quot;</span>)</span><br><span class="line">   <span class="keyword">for</span> _, msg := <span class="keyword">range</span> messages &#123;</span><br><span class="line"></span><br><span class="line">      <span class="keyword">if</span> err := gptStream.Send(&amp;pb.GPTReply&#123;</span><br><span class="line">         Message: msg,</span><br><span class="line">      &#125;); err != <span class="literal">nil</span> &#123;</span><br><span class="line">         log.Printf(<span class="string">&quot;Send error, %v&quot;</span>, err)</span><br><span class="line">         <span class="keyword">return</span> err</span><br><span class="line">      &#125;</span><br><span class="line">      time.Sleep(<span class="number">1</span> * time.Second)</span><br><span class="line">   &#125;</span><br><span class="line">   <span class="keyword">return</span> <span class="literal">nil</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>First, start listening on the server side:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/e6aa9aa617a140bda4f4e1b7a03e52ff%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><h3 id="5-3-Client"><a href="#5-3-Client" class="headerlink" title="5.3 Client"></a>5.3 Client</h3><p>Streaming message reception; client code [gpt_client&#x2F;main.go]</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span 
class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">package</span> main</span><br><span class="line">​</span><br><span class="line"><span class="keyword">import</span> (</span><br><span class="line">   <span class="string">&quot;context&quot;</span></span><br><span class="line">   <span class="string">&quot;flag&quot;</span></span><br><span class="line">   <span class="string">&quot;fmt&quot;</span></span><br><span class="line">   <span class="string">&quot;io&quot;</span></span><br><span class="line">   <span class="string">&quot;log&quot;</span></span><br><span class="line">   <span class="string">&quot;time&quot;</span></span><br><span class="line">   pb <span class="string">&quot;wecom/protos/gpt&quot;</span></span><br><span class="line">​</span><br><span class="line">   <span class="string">&quot;google.golang.org/grpc&quot;</span></span><br><span class="line">   <span 
class="string">&quot;google.golang.org/grpc/credentials/insecure&quot;</span></span><br><span class="line">)</span><br><span class="line">​</span><br><span class="line"><span class="keyword">const</span> defaultName = <span class="string">&quot;world&quot;</span></span><br><span class="line">​</span><br><span class="line"><span class="keyword">var</span> (</span><br><span class="line">   addr = flag.String(<span class="string">&quot;addr&quot;</span>, <span class="string">&quot;localhost:50051&quot;</span>, <span class="string">&quot;&quot;</span>)</span><br><span class="line">   name = flag.String(<span class="string">&quot;name&quot;</span>, defaultName, <span class="string">&quot;&quot;</span>)</span><br><span class="line">)</span><br><span class="line">​</span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">main</span><span class="params">()</span></span> &#123;</span><br><span class="line">   flag.Parse()</span><br><span class="line">   conn, err := grpc.Dial(*addr, grpc.WithTransportCredentials(insecure.NewCredentials()))</span><br><span class="line">   <span class="keyword">if</span> err != <span class="literal">nil</span> &#123;</span><br><span class="line">      log.Fatalf(<span class="string">&quot;Dial failed, %v&quot;</span>, err)</span><br><span class="line">   &#125;</span><br><span class="line">   <span class="keyword">defer</span> conn.Close()</span><br><span class="line">   c := pb.NewGreeterClient(conn)</span><br><span class="line">   ctx, cancel := context.WithTimeout(context.Background(), <span class="number">60</span>*time.Second)</span><br><span class="line">   <span class="keyword">defer</span> cancel()</span><br><span class="line">   steam, err := c.GetGPTStreamData(ctx, &amp;pb.GPTRequest&#123;</span><br><span class="line">      Content: <span class="string">&quot;背一下古诗《春眠》&quot;</span>,</span><br><span class="line">   &#125;)</span><br><span class="line">   <span class="keyword">if</span> 
err != <span class="literal">nil</span> &#123;</span><br><span class="line">      log.Fatalf(<span class="string">&quot;GetGPTMessage failed, %v&quot;</span>, err)</span><br><span class="line">   &#125;</span><br><span class="line">   log.Println(<span class="string">&quot;Get reply:&quot;</span>)</span><br><span class="line">   <span class="keyword">for</span> &#123;</span><br><span class="line">      res, err := steam.Recv()</span><br><span class="line">      <span class="keyword">if</span> err == io.EOF &#123;</span><br><span class="line">         <span class="keyword">break</span></span><br><span class="line">      &#125;</span><br><span class="line">      <span class="keyword">if</span> err != <span class="literal">nil</span> &#123;</span><br><span class="line">         log.Fatalf(<span class="string">&quot;Recv failed, %v&quot;</span>, err)</span><br><span class="line">      &#125;</span><br><span class="line">      fmt.Printf(<span class="string">&quot;%v&quot;</span>, res.GetMessage())</span><br><span class="line">   &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Start the main function of the server and client, the code runs as follows:</p><p>&#x2F; <em>Video not supported</em>&#x2F;</p><p>OK, the streaming gRPC responds successfully.</p><p>Since the project is doing ChatGPT recently, some users will use the streaming response Q&amp;A, so we call the streaming Q&amp;A interface of ChatGPT next to show the daily use scenario of the streaming interface.</p><h3 id="5-4-GPT-Streaming-Q-A-Demonstration"><a href="#5-4-GPT-Streaming-Q-A-Demonstration" class="headerlink" title="5.4 GPT Streaming Q&amp;A Demonstration"></a>5.4 GPT Streaming Q&amp;A Demonstration</h3><h4 id="1-Server-side-logic"><a href="#1-Server-side-logic" class="headerlink" title="1) Server-side logic"></a>1) Server-side logic</h4><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span 
class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">func</span> <span class="params">(s *RPCServe)</span></span> GetGPTStreamData(in *pb.GPTRequest, gptStream pb.Greeter_GetGPTStreamDataServer) <span class="type">error</span> &#123;</span><br><span class="line">   log.Printf(<span class="string">&quot;GetGPTStreamData Request: %v&quot;</span>, 
in.GetContent())</span><br><span class="line">   client := openai.NewClient(OPENAI_API_KEY)</span><br><span class="line">   ctx := context.Background()</span><br><span class="line">​</span><br><span class="line"></span><br><span class="line">   req := openai.ChatCompletionRequest&#123;</span><br><span class="line">      Model:     openai.GPT3Dot5Turbo,</span><br><span class="line">      MaxTokens: <span class="number">2048</span>,</span><br><span class="line">      Messages: []openai.ChatCompletionMessage&#123;</span><br><span class="line">         &#123;</span><br><span class="line">            Role:    openai.ChatMessageRoleUser,</span><br><span class="line">            Content: in.GetContent(),</span><br><span class="line">         &#125;,</span><br><span class="line">      &#125;,</span><br><span class="line">      Stream: <span class="literal">true</span>,</span><br><span class="line">   &#125;</span><br><span class="line"></span><br><span class="line">   stream, err := client.CreateChatCompletionStream(ctx, req)</span><br><span class="line">   <span class="keyword">if</span> err != <span class="literal">nil</span> &#123;</span><br><span class="line">      log.Printf(<span class="string">&quot;ChatCompletion failed, %v&quot;</span>, err)</span><br><span class="line">      <span class="keyword">return</span> err</span><br><span class="line">   &#125;</span><br><span class="line">   <span class="keyword">defer</span> stream.Close()</span><br><span class="line">​</span><br><span class="line">   log.Println(<span class="string">&quot;Send reply:&quot;</span>)</span><br><span class="line">   <span class="keyword">for</span> &#123;</span><br><span class="line">      response, err := stream.Recv()</span><br><span class="line"></span><br><span class="line">      <span class="keyword">if</span> errors.Is(err, io.EOF) &#123;</span><br><span class="line">         log.Printf(<span class="string">&quot;Stream finished&quot;</span>)</span><br><span class="line">         
<span class="keyword">break</span></span><br><span class="line">      &#125;</span><br><span class="line">​</span><br><span class="line">      <span class="keyword">if</span> err != <span class="literal">nil</span> &#123;</span><br><span class="line">         log.Printf(<span class="string">&quot;Stream error, %v&quot;</span>, err)</span><br><span class="line">         <span class="keyword">return</span> err</span><br><span class="line">      &#125;</span><br><span class="line">​</span><br><span class="line"></span><br><span class="line">      data := &amp;pb.GPTReply&#123;</span><br><span class="line">         Message: response.Choices[<span class="number">0</span>].Delta.Content,</span><br><span class="line">      &#125;</span><br><span class="line"></span><br><span class="line">      <span class="keyword">if</span> err := gptStream.Send(data); err != <span class="literal">nil</span> &#123;</span><br><span class="line">         log.Printf(<span class="string">&quot;Send error, %v&quot;</span>, err)</span><br><span class="line">         <span class="keyword">return</span> err</span><br><span class="line">      &#125;</span><br><span class="line">   &#125;</span><br><span class="line">   <span class="keyword">return</span> <span class="literal">nil</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><h4 id="2-Client-Logic"><a href="#2-Client-Logic" class="headerlink" title="2) Client Logic"></a>2) Client Logic</h4><p>Streaming message reception; client code [gpt_client&#x2F;main.go]</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span 
class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">package</span> main</span><br><span class="line">​</span><br><span class="line"><span class="keyword">import</span> (</span><br><span class="line">   <span class="string">&quot;context&quot;</span></span><br><span class="line">   pb <span class="string">&quot;dm-lite/resource/proto/gpt&quot;</span></span><br><span class="line">   <span class="string">&quot;flag&quot;</span></span><br><span class="line">   <span class="string">&quot;fmt&quot;</span></span><br><span class="line">   <span class="string">&quot;google.golang.org/grpc&quot;</span></span><br><span class="line">   <span class="string">&quot;google.golang.org/grpc/credentials/insecure&quot;</span></span><br><span class="line">   <span 
class="string">&quot;io&quot;</span></span><br><span class="line">   <span class="string">&quot;log&quot;</span></span><br><span class="line">   <span class="string">&quot;time&quot;</span></span><br><span class="line">)</span><br><span class="line">​</span><br><span class="line"><span class="keyword">const</span> defaultName = <span class="string">&quot;world&quot;</span></span><br><span class="line">​</span><br><span class="line"><span class="keyword">var</span> (</span><br><span class="line">   addr = flag.String(<span class="string">&quot;addr&quot;</span>, <span class="string">&quot;localhost:50051&quot;</span>, <span class="string">&quot;&quot;</span>)</span><br><span class="line">   name = flag.String(<span class="string">&quot;name&quot;</span>, defaultName, <span class="string">&quot;&quot;</span>)</span><br><span class="line">)</span><br><span class="line">​</span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">main</span><span class="params">()</span></span> &#123;</span><br><span class="line">   flag.Parse()</span><br><span class="line">   conn, err := grpc.Dial(*addr, grpc.WithTransportCredentials(insecure.NewCredentials()))</span><br><span class="line">   <span class="keyword">if</span> err != <span class="literal">nil</span> &#123;</span><br><span class="line">      log.Fatalf(<span class="string">&quot;Dial failed, %v&quot;</span>, err)</span><br><span class="line">   &#125;</span><br><span class="line">   <span class="keyword">defer</span> conn.Close()</span><br><span class="line">   c := pb.NewGreeterClient(conn)</span><br><span class="line">   ctx, cancel := context.WithTimeout(context.Background(), <span class="number">60</span>*time.Second)</span><br><span class="line">   <span class="keyword">defer</span> cancel()</span><br><span class="line">   steam, err := c.GetGPTStreamData(ctx, &amp;pb.GPTRequest&#123;</span><br><span class="line">      Content: <span 
class="string">&quot;写一篇500字的作文，题目为&quot;</span>梦想<span class="string">&quot;&quot;</span>,</span><br><span class="line">   &#125;)</span><br><span class="line">   <span class="keyword">if</span> err != <span class="literal">nil</span> &#123;</span><br><span class="line">      log.Fatalf(<span class="string">&quot;GetGPTMessage failed, %v&quot;</span>, err)</span><br><span class="line">   &#125;</span><br><span class="line">   log.Println(<span class="string">&quot;Get reply:&quot;</span>)</span><br><span class="line">   <span class="keyword">for</span> &#123;</span><br><span class="line">      res, err := steam.Recv()</span><br><span class="line">      <span class="keyword">if</span> err == io.EOF &#123;</span><br><span class="line">         <span class="keyword">break</span></span><br><span class="line">      &#125;</span><br><span class="line">      <span class="keyword">if</span> err != <span class="literal">nil</span> &#123;</span><br><span class="line">         log.Fatalf(<span class="string">&quot;Recv failed, %v&quot;</span>, err)</span><br><span class="line">      &#125;</span><br><span class="line">      fmt.Printf(<span class="string">&quot;%v&quot;</span>, res.GetMessage())</span><br><span class="line">   &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Start the main function of the server and client, the code runs as follows:</p><p>&#x2F; <em>Video not supported</em>&#x2F;</p><p>It is not difficult to find that in the above Q&amp;A scenarios if there is no streaming response, the interface in the case of slow return, it will greatly affect the user experience. That’s why some people say that love is a fine line, have you realized it?</p><blockquote><p>The above code address: [github.com&#x2F;yangfx15&#x2F;rp…] (<a href="https://github.com/yangfx15/rpc_test">https://github.com/yangfx15/rpc_test</a>)</p></blockquote><h2 id="6-Summary"><a href="#6-Summary" class="headerlink" title="6. Summary"></a>6. 
Summary</h2><p>In this article, we introduced RPC: its basic concepts, how it differs from the HTTP communication protocol, and the commonly used RPC frameworks. Then, we wrote a protobuf file following gRPC’s conventions and ran a simple gRPC communication program. Finally, since the project uses ChatGPT, we combined gRPC’s streaming responses with ChatGPT to build a simple streaming Q&amp;A demo.</p><p>Along the way, it is easy to see that interacting over gRPC feels very similar to interacting over HTTP, but gRPC has the advantage of smaller packets and faster communication.</p><p>Therefore, gRPC is a very efficient choice for internal systems that communicate frequently. It is also open-sourced by Google, just like Go, so the community is very active and mature solutions exist for the common problems you are likely to encounter.</p>]]></content>
    
    
    <summary type="html">RPC (Remote Procedure Call) is a computer communication protocol that allows a program on one machine to call a subroutine in another address space as if it were a local call, without the programmer having to handle the underlying communication details.</summary>
    
    
    
    <category term="Technology" scheme="https://www.nablepart.com/categories/Technology/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="framework" scheme="https://www.nablepart.com/tags/framework/"/>
    
    <category term="network" scheme="https://www.nablepart.com/tags/network/"/>
    
    <category term="ChatGPT" scheme="https://www.nablepart.com/tags/ChatGPT/"/>
    
    <category term="RPC" scheme="https://www.nablepart.com/tags/RPC/"/>
    
    <category term="Remote" scheme="https://www.nablepart.com/tags/Remote/"/>
    
    <category term="programmer" scheme="https://www.nablepart.com/tags/programmer/"/>
    
    <category term="communication" scheme="https://www.nablepart.com/tags/communication/"/>
    
  </entry>
  
  <entry>
    <title>Go Language Error Code Design and Management Practices</title>
    <link href="https://www.nablepart.com/5aaa1bda9835/"/>
    <id>https://www.nablepart.com/5aaa1bda9835/</id>
    <published>2023-11-06T04:04:00.000Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
    <content type="html"><![CDATA[<h1 id="1-Introduction"><a href="#1-Introduction" class="headerlink" title="1. Introduction"></a>1. Introduction</h1><h2 id="1-1-Background"><a href="#1-1-Background" class="headerlink" title="1.1 Background"></a>1.1 Background</h2><p>Recently, I’ve been working on a service that interacts directly with the front end and with third-party platforms (which can simply be understood as other departments of the company, or client software), involving modules such as user registration, login, and data processing. The architecture diagram is roughly as follows:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/b7063229b44642e3877adef40d200b0b%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>After receiving the requirements, and given the technology stack the team is familiar with, we decided to develop the backend service (the business logic layer) in Go, using Gin for HTTP interaction, Swaggo for automatic generation of interface documentation, and Redis and MySQL as the K-V and DB storage.</p><p>It is worth noting that the application requires us to <strong>specify and normalize the errors exposed to both the third-party platforms and the Web side</strong>; for example, the error code information available on the Web side must also be available to the third-party platforms.</p><p>Therefore, the design and management of an error code specification became our first problem to solve.</p><h2 id="1-2-Features"><a href="#1-2-Features" class="headerlink" title="1.2 Features"></a>1.2 Features</h2><p>The Go language provides a simple error handling mechanism: the <code>error</code> type. 
error is an interface type, defined as follows:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">type</span> <span class="type">error</span> <span class="keyword">interface</span> &#123;</span><br><span class="line">    Error() <span class="type">string</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Uses of error appear everywhere in code, e.g., in the third-party library Gorm’s automatic table migration, in Gin’s parameter handling, and so on:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="params">(db *DB)</span></span> AutoMigrate(dst ...<span class="keyword">interface</span>&#123;&#125;) <span class="type">error</span> &#123;</span><br><span class="line">    <span class="keyword">return</span> db.Migrator().AutoMigrate(dst...)</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/6382ed53d355471e93791df4ba693e12%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>In addition to the errors returned by Go itself and by third-party packages, we can also create specific error messages with <code>errors.New()</code>:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">div</span><span class="params">(a, b <span class="type">int</span>)</span></span> (<span class="type">float64</span>, <span class="type">error</span>) &#123;</span><br><span class="line">   <span class="keyword">if</span> b == <span class="number">0</span> &#123;</span><br><span class="line">      <span class="keyword">return</span> <span class="number">0</span>, errors.New(<span class="string">&quot;the divisor cannot be 0&quot;</span>)</span><br><span class="line">   &#125;</span><br><span class="line">   <span class="keyword">return</span> <span class="type">float64</span>(a) / <span class="type">float64</span>(b), <span class="literal">nil</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>However, a new problem arises.</p><p>If we write a fresh <code>errors.New()</code> for the same error every time we encounter it, we will not only produce a lot of duplicate code, but it will also become very difficult to compile our error messages for Web-side developers or third-party platforms.</p><p><strong>Imagine 100,000 lines of code: it would be rather unseemly to comb through them one by one looking for <code>errors.New()</code> calls!</strong></p><h1 id="2-Defining-Error-Codes-and-Messages"><a href="#2-Defining-Error-Codes-and-Messages" class="headerlink" title="2. Defining Error Codes and Messages"></a>2. Defining Error Codes and Messages</h1><h2 id="2-1-Error-Code-Design-Specifications"><a href="#2-1-Error-Code-Design-Specifications" class="headerlink" title="2.1 Error Code Design Specifications"></a>2.1 Error Code Design Specifications</h2><p>So we thought of unifying error messages and uniquely identifying them with error codes. 
That is: <strong>one error code corresponds to one error message</strong>; whenever you need the error, just use its code directly.</p><p>In the industry, error codes are usually defined as <code>5~7</code>-digit integer constants (which saves space), so we adopt <strong>5-digit numeric error codes with Chinese error messages</strong>, and divide the error code ranges by business module.</p><h3 id="Module-Description"><a href="#Module-Description" class="headerlink" title="Module Description"></a>Module Description</h3><ul><li><code>1****</code>: service-level error codes, such as internal service errors, invalid parameter information, etc.</li><li><code>2****</code>: business-module-level error codes<ul><li><code>201**</code>: dataset module</li><li><code>202**</code>: user management module</li><li><code>203**</code>: pre-training management module</li></ul></li></ul><h2 id="2-2-Error-code-definition"><a href="#2-2-Error-code-definition" class="headerlink" title="2.2 Error code definition"></a>2.2 Error code definition</h2><p>Create a new <code>err_code</code> package with a new <code>error_handle.go</code> file:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">package</span> err_code</span><br><span class="line"></span><br><span class="line"><span class="keyword">import</span> <span class="string">&quot;github.com/pkg/errors&quot;</span></span><br><span class="line"></span><br><span class="line"><span class="keyword">type</span> Response <span class="keyword">struct</span> &#123;</span><br><span class="line">    Code      ErrCode <span class="string">`json:&quot;code&quot;`</span>       </span><br><span 
class="line">    Msg       <span class="type">string</span>  <span class="string">`json:&quot;msg&quot;`</span>           </span><br><span class="line">    RequestId <span class="type">string</span>  <span class="string">`json:&quot;request_id&quot;`</span> </span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Added error codes and error messages:</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br></pre></td><td class="code"><pre><span class="line">type ErrCode <span class="type">int</span> </span><br><span class="line"></span><br><span class="line"><span class="title function_">const</span> <span class="params">(</span></span><br><span class="line"><span class="params"></span></span><br><span class="line"><span class="params">ServerError    ErrCode = <span class="number">10001</span></span></span><br><span class="line"><span class="params">ParamBindError ErrCode = <span class="number">10002</span></span></span><br><span class="line"><span class="params"></span></span><br><span class="line"><span class="params">IllegalDatasetName ErrCode = <span class="number">20101</span> </span></span><br><span class="line"><span class="params">ParamNameError     ErrCode = <span class="number">20102</span> </span></span><br><span class="line"><span class="params"></span></span><br><span class="line"><span class="params">IllegalPhoneNum         
ErrCode = <span class="number">20201</span> </span></span><br><span class="line"><span class="params">IllegalVerifyCode       ErrCode = <span class="number">20202</span> </span></span><br><span class="line"><span class="params">PhoneRepeatedRegistered ErrCode = <span class="number">20203</span> </span></span><br><span class="line"><span class="params">PhoneIsNotRegistered    ErrCode = <span class="number">20204</span> </span></span><br><span class="line"><span class="params">PhoneRepeatedApproved   ErrCode = <span class="number">20205</span> </span></span><br><span class="line"><span class="params">PhoneIsNotApproved      ErrCode = <span class="number">20206</span> </span></span><br><span class="line"><span class="params"></span></span><br><span class="line"><span class="params">IllegalModelName        ErrCode = <span class="number">20301</span> </span></span><br><span class="line"><span class="params">)</span></span><br></pre></td></tr></table></figure><h2 id="2-2-Map-Mapping-Error-Messages"><a href="#2-2-Map-Mapping-Error-Messages" class="headerlink" title="2.3 Map Mapping Error Messages"></a>2.3 Map Mapping Error Messages</h2><p>Based on the error codes, we use a Map to define the <strong>Chinese error messages</strong>:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br></pre></td><td class="code"><pre><span 
class="line"></span><br><span class="line"><span class="keyword">var</span> errorMsg = <span class="keyword">map</span>[ErrCode]<span class="type">string</span>&#123;</span><br><span class="line">ServerError:          <span class="string">&quot;服务内部错误&quot;</span>,</span><br><span class="line">ParamBindError:     <span class="string">&quot;参数信息有误&quot;</span>,</span><br><span class="line">IllegalDatasetName: <span class="string">&quot;无效的数据集名称&quot;</span>,</span><br><span class="line">ParamNameError:     <span class="string">&quot;参数name错误&quot;</span>,</span><br><span class="line">IllegalPhoneNum:    <span class="string">&quot;手机号格式不正确&quot;</span>,</span><br><span class="line">IllegalModelName:   <span class="string">&quot;非法模型名称&quot;</span>,</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">Text</span><span class="params">(code ErrCode)</span></span> <span class="type">string</span> &#123;</span><br><span class="line">    <span class="keyword">return</span> errorMsg[code]</span><br><span class="line">&#125;</span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="params">(e *Response)</span></span> Error() <span class="type">string</span> &#123; <span class="keyword">return</span> e.Msg &#125;</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">NewCustomError</span><span class="params">(code ErrCode)</span></span> <span class="type">error</span> &#123;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">return</span> errors.Wrap(&amp;Response&#123;</span><br><span class="line">        Code: code,</span><br><span class="line">        Msg:  Text(code),</span><br><span class="line">    &#125;, <span class="string">&quot;&quot;</span>)</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Using the error code information:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span 
class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">CheckMobile</span><span class="params">(phone <span class="type">string</span>)</span></span> <span class="type">bool</span> &#123;</span><br><span class="line"></span><br><span class="line">regRuler := <span class="string">`^1[345789]&#123;1&#125;\d&#123;9&#125;$`</span></span><br><span class="line"></span><br><span class="line">reg := regexp.MustCompile(regRuler)</span><br><span class="line"></span><br><span class="line"><span class="keyword">return</span> reg.MatchString(phone)</span><br><span class="line"></span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">savePhoneNum</span><span class="params">(phone <span class="type">string</span>)</span></span> <span class="type">error</span> &#123;</span><br><span class="line">   <span class="keyword">if</span> phone == <span class="string">&quot;&quot;</span> || !CheckMobile(phone) &#123;</span><br><span class="line"><span class="keyword">return</span> err_code.NewCustomError(err_code.IllegalPhoneNum)</span><br><span class="line">&#125;</span><br><span class="line">   <span class="keyword">return</span> <span class="literal">nil</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>In this way, our error code mechanism is effectively set up, with the following benefits:</p><ul><li>Solves the problem 
of hard-to-manage error information: everything lives in a single <code>err_code</code> package, so at a glance you can see every error the service can return, <strong>making errors easy to collect and localize</strong>;</li><li>Solves the problem of inconsistent, arbitrarily defined error codes: codes are divided into numeric ranges by business module, so <strong>the error code alone tells you which module the problem is in, avoiding finger-pointing between teams</strong>;</li></ul><p>However, attentive readers may have noticed a remaining chore: every time you define a new error code, you must add both the code constant and its Map entry. Is there a more concise way to define them?</p><p>The answer is yes! As programmers who like to be (productively) lazy, <strong>simple and efficient automation</strong> is the goal we are pursuing.</p><h1 id="3-Automated-generation-of-error-codes-and-error-messages"><a href="#3-Automated-generation-of-error-codes-and-error-messages" class="headerlink" title="3. Automated generation of error codes and error messages"></a>3. Automated generation of error codes and error messages</h1><h2 id="3-1-stringer"><a href="#3-1-stringer" class="headerlink" title="3.1 stringer"></a>3.1 stringer</h2><p><code>stringer</code> is an open-source code-generation tool for Go; the installation command is:</p><blockquote><p>go install golang.org&#x2F;x&#x2F;tools&#x2F;cmd&#x2F;stringer@latest</p></blockquote><p>In addition to the tool, we also need Go’s <code>iota</code> counter for automatic accumulation of the constant numbers:</p><blockquote><p>PS: <code>iota</code> is the <strong>Go language’s constant counter</strong> and can only be used in constant expressions.<br>Its value starts at 0 and grows by 1 for each new line in a <code>const</code> block. 
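</p></blockquote><p>This behavior can be checked with a tiny program (the codes below mirror the article’s ranges):</p>

```go
package main

import "fmt"

type ErrCode int

// iota is 0 on the first line of a const block and increases by 1 per line,
// so an offset like 10001 yields sequential error codes.
const (
	ServerError    ErrCode = iota + 10001 // 10001
	ParamBindError                        // 10002
	TokenAuthFail                         // 10003
)

// A new const block resets iota back to 0.
const (
	IllegalDatasetName ErrCode = iota + 20101 // 20101
)

func main() {
	fmt.Println(ServerError, TokenAuthFail, IllegalDatasetName) // 10001 10003 20101
}
```

<blockquote><p>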
iota increases its value by 1 until it encounters the next const keyword, when its value is reset to 0.</p></blockquote><h2 id="3-2-Defining-Error-Messages"><a href="#3-2-Defining-Error-Messages" class="headerlink" title="3.2 Defining Error Messages"></a>3.2 Defining Error Messages</h2><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br></pre></td><td 
class="code"><pre><span class="line"><span class="keyword">package</span> err_code</span><br><span class="line"></span><br><span class="line"><span class="keyword">import</span> <span class="string">&quot;github.com/pkg/errors&quot;</span></span><br><span class="line"></span><br><span class="line"><span class="keyword">type</span> Response <span class="keyword">struct</span> &#123;</span><br><span class="line">  Code      ErrCode <span class="string">`json:&quot;code&quot;`</span>       </span><br><span class="line">  Msg       <span class="type">string</span>  <span class="string">`json:&quot;msg&quot;`</span>        </span><br><span class="line">  RequestId <span class="type">string</span>  <span class="string">`json:&quot;request_id&quot;`</span> </span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="params">(e *Response)</span></span> Error() <span class="type">string</span> &#123;</span><br><span class="line">  <span class="keyword">return</span> e.Code.String()</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line"><span class="keyword">type</span> ErrCode <span class="type">int</span> </span><br><span class="line"></span><br><span class="line"><span class="keyword">const</span> (</span><br><span class="line"></span><br><span class="line">ServerError     ErrCode = <span class="literal">iota</span> + <span class="number">10001</span> </span><br><span class="line">ParamBindError                         </span><br><span class="line">TokenAuthFail                          </span><br><span class="line">TokenIsNotExist                        </span><br><span class="line">)</span><br><span class="line"></span><br><span class="line"><span class="keyword">const</span> (</span><br><span class="line"></span><br><span class="line">IllegalDatasetName ErrCode = <span class="literal">iota</span> + <span 
class="number">20101</span> </span><br><span class="line">)</span><br><span class="line"></span><br><span class="line"><span class="keyword">const</span> (</span><br><span class="line"></span><br><span class="line">IllegalPhoneNum         ErrCode = <span class="literal">iota</span> + <span class="number">20201</span> </span><br><span class="line">IllegalVerifyCode                              </span><br><span class="line">PhoneRepeatedRegistered                        </span><br><span class="line">PhoneIsNotRegistered                           </span><br><span class="line">PhoneRepeatedApproved                          </span><br><span class="line">PhoneIsNotApproved                             </span><br><span class="line">)</span><br><span class="line"></span><br><span class="line"><span class="keyword">const</span> (</span><br><span class="line"></span><br><span class="line">IllegalModelName ErrCode = <span class="literal">iota</span> + <span class="number">20301</span> </span><br><span class="line">)</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">NewCustomError</span><span class="params">(code ErrCode)</span></span> <span class="type">error</span> &#123;</span><br><span class="line"></span><br><span class="line"><span class="keyword">return</span> errors.Wrap(&amp;Response&#123;</span><br><span class="line">Code: code,</span><br><span class="line">Msg:  code.String(),</span><br><span class="line">&#125;, <span class="string">&quot;&quot;</span>)</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>With the above definition of the error code <strong>const constant + error code name + error message comment</strong>, where <code>iota</code> is automatically constant-accumulated.</p><p>I.e. 
<code>ParamBindError</code> is <code>10002</code> and <code>TokenAuthFail</code> is <code>10003</code>:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line"><span class="keyword">const</span> (</span><br><span class="line"></span><br><span class="line">   ServerError     ErrCode = <span class="literal">iota</span> + <span class="number">10001</span></span><br><span class="line">   ParamBindError</span><br><span class="line">   TokenAuthFail</span><br><span class="line">   TokenIsNotExist</span><br><span class="line">)</span><br></pre></td></tr></table></figure><p>There are two ways to generate the error-code-to-message mapping.</p><h3 id="1-Run-the-stringer-utility-in-Goland"><a href="#1-Run-the-stringer-utility-in-Goland" class="headerlink" title="1) Run the stringer utility in Goland"></a>1) Run the <code>stringer</code> utility in Goland</h3><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/8541198b92fd426893da327ff1bdd3f2%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><h3 id="2-Execute-the-command-to-run-the-stringer-utility"><a href="#2-Execute-the-command-to-run-the-stringer-utility" class="headerlink" title="2) Execute the command to run the stringer utility"></a>2) Execute the command to run the <code>stringer</code> utility</h3><p>We run the following command on the <code>err_code/error_handle.go</code> file:</p><blockquote><p>go generate internal&#x2F;protocols&#x2F;err_code&#x2F;error_handle.go</p></blockquote><p>(This works because the file contains a <code>//go:generate stringer -type ErrCode -linecomment</code> directive, as recorded in the header of the generated file below.)</p><p>This generates a new <code>errcode_string.go</code> file with a mapping of <code>err_code</code> to <code>err_msg</code>:</p><figure 
class="highlight ini"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br></pre></td><td class="code"><pre><span class="line">// Code generated by &quot;stringer -type ErrCode -linecomment&quot;</span><br><span class="line"></span><br><span class="line">package err_code</span><br><span class="line"></span><br><span class="line">import 
&quot;strconv&quot;</span><br><span class="line"></span><br><span class="line">func _() &#123;</span><br><span class="line">   // An &quot;invalid array index&quot; compiler error signifies that the constant values have changed.</span><br><span class="line"></span><br><span class="line">   // Re-run the stringer command to generate them again.</span><br><span class="line"></span><br><span class="line">   var x <span class="section">[1]</span>struct&#123;&#125;</span><br><span class="line">   <span class="attr">_</span> = x[ServerError-<span class="number">10001</span>]</span><br><span class="line">   <span class="attr">_</span> = x[ParamBindError-<span class="number">10002</span>]</span><br><span class="line">   <span class="attr">_</span> = x[TokenAuthFail-<span class="number">10003</span>]</span><br><span class="line">   <span class="attr">_</span> = x[TokenIsNotExist-<span class="number">10004</span>]</span><br><span class="line">   <span class="attr">_</span> = x[IllegalDatasetName-<span class="number">20101</span>]</span><br><span class="line">   <span class="attr">_</span> = x[IllegalPhoneNum-<span class="number">20201</span>]</span><br><span class="line">   <span class="attr">_</span> = x[IllegalVerifyCode-<span class="number">20202</span>]</span><br><span class="line">   <span class="attr">_</span> = x[PhoneRepeatedRegistered-<span class="number">20203</span>]</span><br><span class="line">   <span class="attr">_</span> = x[PhoneIsNotRegistered-<span class="number">20204</span>]</span><br><span class="line">   <span class="attr">_</span> = x[PhoneRepeatedApproved-<span class="number">20205</span>]</span><br><span class="line">   <span class="attr">_</span> = x[PhoneIsNotApproved-<span class="number">20206</span>]</span><br><span class="line">   <span class="attr">_</span> = x[IllegalModelName-<span class="number">20301</span>]</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line">const (</span><br><span class="line">  
 <span class="attr">_ErrCode_name_0</span> = <span class="string">&quot;服务内部错误参数信息有误Token鉴权失败Token不存在&quot;</span></span><br><span class="line">   <span class="attr">_ErrCode_name_1</span> = <span class="string">&quot;非法数据集名称&quot;</span></span><br><span class="line">   <span class="attr">_ErrCode_name_2</span> = <span class="string">&quot;手机号格式不正确无效的验证码手机号不可重复注册该手机号未注册手机号不可重复审批该手机号未审批&quot;</span></span><br><span class="line">   <span class="attr">_ErrCode_name_3</span> = <span class="string">&quot;非法模型名称&quot;</span></span><br><span class="line">)</span><br><span class="line"></span><br><span class="line">var (</span><br><span class="line">   <span class="attr">_ErrCode_index_0</span> = [...]uint8&#123;<span class="number">0</span>, <span class="number">18</span>, <span class="number">36</span>, <span class="number">53</span>, <span class="number">67</span>&#125;</span><br><span class="line">   <span class="attr">_ErrCode_index_2</span> = [...]uint8&#123;<span class="number">0</span>, <span class="number">24</span>, <span class="number">42</span>, <span class="number">69</span>, <span class="number">90</span>, <span class="number">117</span>, <span class="number">138</span>&#125;</span><br><span class="line">)</span><br><span class="line"></span><br><span class="line">func (i ErrCode) String() string &#123;</span><br><span class="line">   switch &#123;</span><br><span class="line">   case <span class="number">10001</span> &lt;= i &amp;&amp; i &lt;= <span class="number">10004</span>: i -= <span class="number">10001</span></span><br><span class="line">      return _ErrCode_name_0<span class="section">[_ErrCode_index_0[i]</span>:_ErrCode_index_0<span class="section">[i+1]]</span></span><br><span class="line">   case <span class="attr">i</span> == <span class="number">20101</span>:</span><br><span class="line">      return _ErrCode_name_1</span><br><span class="line">   case <span class="number">20201</span> &lt;= i &amp;&amp; i &lt;= <span class="number">20206</span>: i -= <span class="number">20201</span></span><br><span class="line">      return _ErrCode_name_2<span 
class="section">[_ErrCode_index_2[i]</span>:_ErrCode_index_2<span class="section">[i+1]]</span></span><br><span class="line">   case <span class="attr">i</span> == <span class="number">20301</span>:</span><br><span class="line">      return _ErrCode_name_3</span><br><span class="line">   default:</span><br><span class="line">      return &quot;ErrCode(&quot; + strconv.FormatInt(int64(i), 10) + &quot;)&quot;</span><br><span class="line">   &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>This way, we don’t have to manually create a new Map to maintain the mapping relationship!</p><blockquote><p>Note: After each addition, deletion or modification of error codes, you need to execute <code>go generate</code> to generate a new mapping file <code>errcode_string.go</code>.<br>This file is the mapping file for error codes and error messages, do not modify or delete it manually!</p></blockquote><h1 id="4-Error-Code-Practice"><a href="#4-Error-Code-Practice" class="headerlink" title="4. Error Code Practice"></a>4. Error Code Practice</h1><p>In summary, we have defined the error code message. 
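For reference, here is a small self-contained sketch of how such codes can back a custom error type. The names <code>CustomError</code> and <code>NewCustomError</code> echo the calls used later in this article, but this is an illustrative stand-in, not the project’s actual <code>err_code</code> package, and the hand-written <code>String()</code> stands in for the stringer-generated one:

```go
package main

import "fmt"

// ErrCode mirrors the article's integer error codes.
type ErrCode int

const (
	ServerError    ErrCode = 10001 // internal failure
	ParamBindError ErrCode = 10002 // request binding failure
)

// String is a tiny hand-written stand-in for the stringer-generated mapping.
func (i ErrCode) String() string {
	switch i {
	case ServerError:
		return "internal server error"
	case ParamBindError:
		return "invalid request parameters"
	default:
		return fmt.Sprintf("ErrCode(%d)", int(i))
	}
}

// CustomError carries a code so handlers can return it as a plain error.
type CustomError struct {
	Code ErrCode
}

func (e *CustomError) Error() string {
	return fmt.Sprintf("[%d] %s", int(e.Code), e.Code)
}

// NewCustomError wraps a code into an error value.
func NewCustomError(code ErrCode) error {
	return &CustomError{Code: code}
}

func main() {
	err := NewCustomError(ParamBindError)
	fmt.Println(err) // [10002] invalid request parameters
}
```

Because the message lives behind <code>String()</code>, call sites only ever mention the code constant; the text stays in one generated place.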
Next, let’s use a simple interface to briefly demonstrate the usage.</p><p>A portion of the <code>go.mod</code> dependencies is listed below:</p><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br></pre></td><td class="code"><pre><span class="line">module wanx-llm-server</span><br><span class="line"></span><br><span class="line">go 1.20</span><br><span class="line"></span><br><span class="line">require (</span><br><span class="line">    github.com/gin-gonic/gin v1.9.1</span><br><span class="line">    github.com/pkg/errors v0.9.1</span><br><span class="line">    github.com/spf13/viper v1.16.0</span><br><span class="line">    github.com/swaggo/gin-swagger v1.6.0</span><br><span class="line">    github.com/swaggo/swag v1.16.1</span><br><span class="line">    go.uber.org/zap v1.25.0</span><br><span class="line">    golang.org/x/arch v0.4.0 // indirect</span><br><span class="line">    golang.org/x/tools v0.12.0 // indirect</span><br><span class="line">    google.golang.org/protobuf v1.31.0 // indirect</span><br><span class="line">    gorm.io/driver/mysql v1.5.1</span><br><span class="line">    gorm.io/gorm v1.25.4</span><br><span class="line">)</span><br></pre></td></tr></table></figure><p>Add <code>main.go</code> as the service startup entry with the following code:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span 
class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">package</span> main</span><br><span class="line"></span><br><span class="line"><span class="keyword">import</span> (</span><br><span class="line">   <span class="string">&quot;flag&quot;</span></span><br><span class="line">   <span class="string">&quot;fmt&quot;</span></span><br><span class="line">   <span class="string">&quot;os&quot;</span></span><br><span class="line"></span><br><span class="line">   <span class="string">&quot;go.uber.org/zap&quot;</span></span><br><span class="line">   _ <span class="string">&quot;wanx-llm-server/docs&quot;</span></span><br><span class="line">   <span class="string">&quot;wanx-llm-server/internal/cmd&quot;</span></span><br><span class="line">   <span class="string">&quot;wanx-llm-server/internal/global&quot;</span></span><br><span class="line">   <span class="string">&quot;wanx-llm-server/internal/initialize&quot;</span></span><br><span class="line">   util <span 
class="string">&quot;wanx-llm-server/internal/utils&quot;</span></span><br><span class="line">)</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">main</span><span class="params">()</span></span> &#123;</span><br><span class="line">   configPath := flag.String(<span class="string">&quot;conf&quot;</span>, <span class="string">&quot;./config/config.yaml&quot;</span>, <span class="string">&quot;config path&quot;</span>)</span><br><span class="line">   flag.Parse()</span><br><span class="line"></span><br><span class="line">   err := initialize.Init(*configPath)</span><br><span class="line">   <span class="keyword">if</span> err != <span class="literal">nil</span> &#123;</span><br><span class="line">      global.Logger.Error(<span class="string">&quot;server init failed&quot;</span>, zap.Any(util.ErrKey, err))</span><br><span class="line">      fmt.Printf(<span class="string">&quot;server init failed, %v\n&quot;</span>, err)</span><br><span class="line">      os.Exit(<span class="number">1</span>)</span><br><span class="line">   &#125;</span><br><span class="line"></span><br><span class="line">   r := cmd.SetupRouter()</span><br><span class="line"></span><br><span class="line">   addr := fmt.Sprintf(<span class="string">&quot;:%v&quot;</span>, <span class="number">8088</span>)</span><br><span class="line">   <span class="keyword">if</span> err := r.Run(addr); err != <span class="literal">nil</span> &#123;</span><br><span class="line">      global.Logger.Error(fmt.Sprintf(<span class="string">&quot;gin run failed, %v&quot;</span>, err))</span><br><span class="line">      <span class="keyword">return</span></span><br><span class="line">   &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p><code>server.go</code> serves as the HTTP request entry; the key code is as follows:</p><figure class="highlight scss"><table><tr><td class="gutter"><pre><span 
class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br></pre></td><td class="code"><pre><span class="line">func <span class="built_in">SetupRouter</span>() *gin<span class="selector-class">.Engine</span> &#123;</span><br><span class="line">    <span class="attribute">r</span> := gin.<span class="built_in">Default</span>()</span><br><span class="line">    r.<span class="built_in">POST</span>(<span class="string">&quot;/api/v1/user/register&quot;</span>, userRegister)</span><br><span class="line">    return r</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line">func <span class="built_in">userRegister</span>(c *gin.Context) &#123;</span><br><span class="line">requestId := c.Writer.<span class="built_in">Header</span>().<span class="built_in">Get</span>(<span class="string">&quot;X-Request-Id&quot;</span>)</span><br><span 
class="line">resp := &amp;err_code.Response&#123;RequestId: requestId&#125;</span><br><span class="line"></span><br><span class="line">defer <span class="built_in">func</span>() &#123;</span><br><span class="line">if resp<span class="selector-class">.Code</span> != <span class="number">0</span> &#123;</span><br><span class="line">c<span class="selector-class">.JSONP</span>(http.StatusOK, &amp;user.GenerateCodeResp&#123;Response: resp&#125;)</span><br><span class="line">&#125;</span><br><span class="line">&#125;()</span><br><span class="line"></span><br><span class="line">req := &amp;user.RegisterUserReq&#123;&#125;</span><br><span class="line">err := c.<span class="built_in">BindJSON</span>(req)</span><br><span class="line">if err != nil &#123;</span><br><span class="line">errors<span class="selector-class">.As</span>(err_code.NewCustomError(err_code.ParamBindError), &amp;resp)</span><br><span class="line">return</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line">err = service<span class="selector-class">.RegisterUser</span>(requestId, req)</span><br><span class="line">if err != nil &#123;</span><br><span class="line"></span><br><span class="line">if !errors<span class="selector-class">.As</span>(err, &amp;resp) &#123;</span><br><span class="line">errors<span class="selector-class">.As</span>(err_code.NewCustomError(err_code.ServerError), &amp;resp)</span><br><span class="line">&#125;</span><br><span class="line">return</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line">c<span class="selector-class">.JSONP</span>(http.StatusOK, &amp;user.RegisterUserResp&#123;</span><br><span class="line">Response: resp,</span><br><span class="line">Data:     user.RegisterUser&#123;State: service.RegisteredState&#125;,</span><br><span class="line">&#125;)</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p><code>service/user.go</code> implements the specific business logic; the key code is as follows:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">RegisterUser</span><span class="params">(requestId <span class="type">string</span>, req *user.RegisterUserReq)</span></span> <span class="type">error</span> &#123;</span><br><span class="line"><span class="keyword">if</span> req.Phone == <span class="string">&quot;&quot;</span> || !CheckMobile(req.Phone) &#123;</span><br><span class="line"><span class="keyword">return</span> err_code.NewCustomError(err_code.IllegalPhoneNum)</span><br><span 
class="line">&#125;</span><br><span class="line"></span><br><span class="line">smsOBj := &amp;sms.SMS&#123;</span><br><span class="line">Phone:      req.Phone,</span><br><span class="line">Code:       req.Code,</span><br><span class="line">CodeExpire: global.Config.CodeSMS.VerifyCodeExpire,</span><br><span class="line">&#125;</span><br><span class="line">codePass, msg, err := smsOBj.VerifyCode(global.RedisClient)</span><br><span class="line"><span class="keyword">if</span> err != <span class="literal">nil</span> &#123;</span><br><span class="line"><span class="keyword">return</span> err_code.NewCustomError(err_code.ServerError)</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line"><span class="keyword">if</span> !codePass &#123;</span><br><span class="line"><span class="keyword">return</span> err_code.NewCustomError(err_code.IllegalVerifyCode)</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line">exist, err := (&amp;model.ApprovedTable&#123;&#125;).IsExistByPhone(req.Phone)</span><br><span class="line"><span class="keyword">if</span> err != <span class="literal">nil</span> &#123;</span><br><span class="line"><span class="keyword">return</span> err_code.NewCustomError(err_code.ServerError)</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line"><span class="keyword">if</span> exist &#123;</span><br><span class="line"><span class="keyword">return</span> err_code.NewCustomError(err_code.PhoneRepeatedRegistered)</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line">ur := &amp;model.UserApproved&#123;</span><br><span class="line">Phone: req.Phone,</span><br><span class="line">State: RegisteredState,</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line">_, err = (&amp;model.ApprovedTable&#123;&#125;).Insert(ur)</span><br><span class="line"><span class="keyword">if</span> 
err != <span class="literal">nil</span> &#123;</span><br><span class="line"><span class="keyword">return</span> err_code.NewCustomError(err_code.ServerError)</span><br><span class="line">&#125;</span><br><span class="line"><span class="keyword">return</span> <span class="literal">nil</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>In this example, by returning predefined error codes directly, we avoid repeatedly constructing ad-hoc errors and then matching them against <code>error_code</code> definitions by hand.</p><p>In this way, a standardized system of error codes is created!</p><p>And that’s a wrap, time to celebrate!</p>]]></content>
    
    
    <summary type="html">If we defined the same error anew every time we encountered it, with yet another errors.New(), not only would there be a lot of duplicate code, but it would also be very hard to present our error messages consistently to web developers or third-party platforms. So we thought of unifying our error messages</summary>
    
    
    
    <category term="Go" scheme="https://www.nablepart.com/categories/Go/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="messages" scheme="https://www.nablepart.com/tags/messages/"/>
    
    <category term="Backend Technology Sharing" scheme="https://www.nablepart.com/tags/Backend-Technology-Sharing/"/>
    
    <category term="network" scheme="https://www.nablepart.com/tags/network/"/>
    
    <category term="recognize" scheme="https://www.nablepart.com/tags/recognize/"/>
    
    <category term="defined" scheme="https://www.nablepart.com/tags/defined/"/>
    
    <category term="third-party" scheme="https://www.nablepart.com/tags/third-party/"/>
    
    <category term="encountered" scheme="https://www.nablepart.com/tags/encountered/"/>
    
  </entry>
  
  <entry>
    <title>Go Installation and Configuration Run</title>
    <link href="https://www.nablepart.com/a3a25f711f15/"/>
    <id>https://www.nablepart.com/a3a25f711f15/</id>
    <published>2023-11-06T03:04:00.000Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
    <content type="html"><![CDATA[<h3 id="Go-Installation-and-Configuration-Run"><a href="#Go-Installation-and-Configuration-Run" class="headerlink" title="Go Installation and Configuration Run"></a>Go Installation and Configuration Run</h3><blockquote><p>A must-read guide for newbies on configuring Go environment variables and running Go.</p></blockquote><p>Go download link:<a href="https://studygolang.com/dl">studygolang.com&#x2F;dl</a></p><p>We can go directly to the Go language Chinese website to find the latest stable release (1.15.4 as of 2020-11-08), and then pick the Windows build to download (as shown in the figure):</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/qB5Drm4uRx7JYhZ.webp"></p><blockquote><p>There are two download options: the zip archive or the msi installer. To save time, we’ll download the zip archive, the first one in the red box, go1.15.4.windows-386.zip.</p></blockquote><p>Once the download is complete, unzip it to a directory on your computer (this is the directory I unzipped to locally), and then go to the bin directory inside the unzipped package:</p><p><img src="https://s2.loli.net/2023/11/07/kbqYEshRpou75cJ.webp"></p><p><img src="https://s2.loli.net/2023/11/07/UJ5TPBENpFvQcK3.webp"></p><p>Then right click on this computer (for Win7 etc. 
open My Computer) and go to the properties page:</p><p><img src="https://s2.loli.net/2023/11/07/98NxDTUns4LzQ13.webp"></p><p>Open Advanced System Settings, Environment Variables, and edit the value of Path in System Variables:</p><p><img src="https://s2.loli.net/2023/11/07/wJuDaH5BX8IvKqQ.webp"></p><p>Then add the address of the bin directory:</p><p><img src="https://s2.loli.net/2023/11/07/5t3ANlVDG7kphQH.webp"></p><p>Finally, to check whether the configuration succeeded, open the Run dialog (shortcut Win + R) and type cmd to launch the command line:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/12d346963d37400da534cd6b7bbbaadf%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>Then type <code>go version</code> in any directory:</p><p><img src="https://s2.loli.net/2023/11/07/I3lRJ8qUB75OWus.webp"></p><blockquote><p>The Go version 1.15.4 we just downloaded appears, la la la ~ Congratulations, the configuration is successful!</p></blockquote><h3 id="运行-Go-程序"><a href="#运行-Go-程序" class="headerlink" title="Running Go Programs"></a>Running Go Programs</h3><p>Now we can run our Go program directly from the command line.</p><p>First, from the command line, go to the Go file directory. 
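For reference, the helloworld.go compiled and run in this walkthrough is the classic first program; a minimal version (assuming the same file name) might look like this:

```go
package main

import "fmt"

// greeting returns the message the program prints.
func greeting() string {
	return "Hello, World!"
}

func main() {
	// Run with `go run helloworld.go`, or `go build` and execute the .exe.
	fmt.Println(greeting()) // Hello, World!
}
```
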
For example, my current helloworld.go is in the C:\Users\37595\Desktop\KnowledgeBase\code\1_helloworld directory:</p><p><img src="https://s2.loli.net/2023/11/07/2a84neb3OM5NlDA.webp"></p><ol><li><h5 id="There-are-two-ways-to-run-it"><a href="#There-are-two-ways-to-run-it" class="headerlink" title="There are two ways to run it:"></a>There are two ways to run it:</h5><ol><li>go build compiles helloworld, and after successful compilation, a new .exe file is added to the directory:</li></ol></li></ol><p><img src="https://s2.loli.net/2023/11/07/hwlHPYmL7fvuMa1.webp"></p><blockquote><p>Execute the compiled file:</p></blockquote><p><img src="https://s2.loli.net/2023/11/07/VoeOrpyJRgn6PuM.webp"></p><ol><li>The go run command is executed directly:</li></ol><p><img src="https://s2.loli.net/2023/11/07/AI36lUkyOFNEfnR.webp"></p><ul><li><h3 id="Summary"><a href="#Summary" class="headerlink" title="Summary"></a>Summary</h3><ul><li>You may have noticed that we didn’t use the traditional installer method to configure the Go environment, but instead downloaded the .zip file and unzipped it to configure the environment variables. The difference between this and the .msi installation is that it doesn’t write any configuration information into our computer’s registry, and it can’t generate shortcuts. Therefore, if you are already familiar with the installation process, or want to save time, you can choose the .zip installation method directly, and you can also run the Go program normally!</li><li>After compiling the Go program, it will be an executable .exe file, which can still be run on computers that do not have the Go compilation package installed and configured, just like Java and other programming languages, which can be compiled once and run everywhere. 
The difference is that the .class bytecode file generated after Java compilation needs to be run on the JVM (Java Virtual Machine), while the .exe file compiled by Go can be opened and run directly on a Windows machine.</li></ul></li></ul><blockquote><p>PS：Some operating system computers will flash back after opening the .exe file directly, in this case, you can add a line at the end of the code to get the input parameter code from the keyboard on it:</p></blockquote><figure class="highlight arduino"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">fmt.<span class="built_in">Scanf</span>(<span class="string">&quot;a&quot;</span>)</span><br></pre></td></tr></table></figure>]]></content>
    
    
    <summary type="html">You may have noticed that when we configure the Go language environment, we don&#39;t use the traditional installer; instead we download the .zip file, unzip it, and configure the environment variables. This differs from the .msi installation in that it doesn&#39;t write configuration information into the computer&#39;s registry and can&#39;t generate shortcuts. The Go program is compiled into an executable .exe file…</summary>
    
    
    
    <category term="Go" scheme="https://www.nablepart.com/categories/Go/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="Backend Technology Sharing" scheme="https://www.nablepart.com/tags/Backend-Technology-Sharing/"/>
    
    <category term="network" scheme="https://www.nablepart.com/tags/network/"/>
    
    <category term="recognize" scheme="https://www.nablepart.com/tags/recognize/"/>
    
    <category term="environment" scheme="https://www.nablepart.com/tags/environment/"/>
    
    <category term="executable" scheme="https://www.nablepart.com/tags/executable/"/>
    
    <category term="variables" scheme="https://www.nablepart.com/tags/variables/"/>
    
  </entry>
  
  <entry>
    <title>Git is the leading code management tool</title>
    <link href="https://www.nablepart.com/aeb29496a1a1/"/>
    <id>https://www.nablepart.com/aeb29496a1a1/</id>
    <published>2023-11-06T02:04:00.000Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
    <content type="html"><![CDATA[<h3 id="1-Introduction"><a href="#1-Introduction" class="headerlink" title="1. Introduction"></a>1. Introduction</h3><p>Ever since I started working, I’ve been using code collaboration tools, most often Git.</p><p>However, I recently realized that many people (including myself) are only familiar with pulling and committing code, but don’t even know how to use <code>git revert/rebase</code>.</p><p>So I looked up some information and wrote this article, which I believe developers will find useful.</p><h3 id="2-What-is-Git"><a href="#2-What-is-Git" class="headerlink" title="2. What is Git?"></a>2. What is Git?</h3><p><img src="https://s2.loli.net/2023/11/07/MQEvm5VbZtoxKLU.webp"></p><p>Whether you are a coder at a large or small company, or an individual developer, and whether you develop collaboratively with many people or simply upload once and download many times, you need a <strong>code version control system</strong>.</p><p>And Git is the best of the best for code versioning. What, you still prefer SVN? Then let me list the advantages of Git; how would you respond?</p><ol><li>Git is distributed version control, SVN is not;</li><li>Git stores content as metadata, whereas SVN stores files;</li><li>Git uses the SHA-1 hash algorithm for content storage, so for content integrity, Git beats SVN;</li><li>In terms of market share, developers using Git far outnumber those using SVN.</li></ol><p>Having said that, I don’t mean to show off, just to make clear the status that Git holds.</p><p>Nowadays, if you’re a freshman or a hired developer and you’re not skilled in Git, you’ll have to walk around the company with your head down, so don’t ask me how I know (just kidding).</p><h3 id="3-Installation-and-Configuration"><a href="#3-Installation-and-Configuration" class="headerlink" title="3. Installation and Configuration"></a>3. 
Installation and Configuration</h3><p>Windows:</p><blockquote><p>Installation package download address: <a href="https://gitforwindows.org/">gitforwindows.org&#x2F;</a></p></blockquote><p>Mac:</p><blockquote><p><a href="http://sourceforge.net/projects/git-osx-installer/">sourceforge.net&#x2F;projects&#x2F;git-osx-installer&#x2F;</a></p></blockquote><p>Taking Windows as an example, after installation you can open “Git” -&gt; “Git Bash” from the start menu to get a Git window for commands:</p><p><img src="https://s2.loli.net/2023/11/07/51YOJbi2B4vA9jM.webp"></p><h3 id="4-Pulling-repository-code"><a href="#4-Pulling-repository-code" class="headerlink" title="4 Pulling repository code"></a>4 Pulling repository code</h3><p>First, choose a directory as our code repository, that is, the place where we store our code projects. Usually, we choose the D drive:</p><p><img src="https://s2.loli.net/2023/11/07/z16P9eEbX7HDpNC.webp"></p><p>Then get the repository address from GitHub, e.g., copy the repository address directly 
(<a href="https://github.com/yangfx15/coder">https://github.com/yangfx15/coder</a>):</p><p><img src="https://s2.loli.net/2023/11/07/OrqV1yX8GocJ2Wm.webp"></p><p>Then run <code>git clone https://github.com/yangfx15/coder.git</code> in Git to pull the code and enter the coder directory:</p><blockquote><p>git clone https://github.com/yangfx15/coder.git<br>cd coder</p></blockquote><p><img src="https://s2.loli.net/2023/11/07/xU5HRVqOPJloQe6.webp"></p><p>Seeing the <code>(main)</code> flag means that the remote code has been downloaded to the local repository!</p><h3 id="5-Code-Branching"><a href="#5-Code-Branching" class="headerlink" title="5 Code Branching"></a>5 Code Branching</h3><p>When it comes to collaborating on code, naturally code branches are involved.</p><p><img src="https://s2.loli.net/2023/11/07/OWgzixDU3LX8j9m.webp"></p><p>There are four kinds of branch operations: showing, switching, creating, and deleting branches.</p><table><thead><tr><th>Command</th><th>Description</th></tr></thead><tbody><tr><td><code>git branch</code></td><td>List all local branches</td></tr><tr><td><code>git branch -r</code></td><td>List all remote branches</td></tr><tr><td><code>git branch -a</code></td><td>List all local and remote branches</td></tr><tr><td><code>git branch &lt;branch&gt;</code></td><td>Create a new branch, but stay on the current branch</td></tr><tr><td><code>git checkout -b &lt;branch&gt;</code></td><td>Create a new branch and switch to it</td></tr><tr><td><code>git branch --track &lt;branch&gt; &lt;remote-branch&gt;</code></td><td>Create a new branch and establish a tracking relationship with the specified remote branch</td></tr><tr><td><code>git checkout &lt;branch&gt;</code></td><td>Switch to the specified branch and update the workspace</td></tr><tr><td><code>git branch -d &lt;branch&gt;</code></td><td>Delete a local branch</td></tr><tr><td><code>git push origin --delete &lt;branch&gt;</code></td><td>Delete a remote branch</td></tr></tbody></table><p>There are a lot of operations on branches, but they are relatively simple to remember.</p><h4 id="5-1-Branch-Common-Operations"><a href="#5-1-Branch-Common-Operations" class="headerlink" title="5.1 Branch Common Operations"></a>5.1 Branch Common Operations</h4><p>Zhangsan and Lisi are co-developers, and they each have personal branches under the <code>main</code> master branch: feat_zhangsan and feat_lisi.</p><blockquote><p>git checkout -b feat_zhangsan<br>git checkout -b feat_lisi</p></blockquote><p>Zhangsan develops feature A and Lisi develops feature B. When Zhangsan finishes developing, he pushes all local code to the remote branch:</p><blockquote><p>git add .<br>git commit -m “feature A”<br>git push origin feat_zhangsan<br>git branch --set-upstream-to&#x3D;origin&#x2F;feat_zhangsan</p></blockquote><p>Read on if you are not sure about these steps.</p><h3 id="6-Code-Push-Management"><a href="#6-Code-Push-Management" class="headerlink" title="6. Code Push Management"></a>6. Code Push Management</h3><p><img src="https://s2.loli.net/2023/11/07/hdfpz1w3nOBLUP6.webp"></p><h4 id="6-1-add"><a href="#6-1-add" class="headerlink" title="6.1 add"></a>6.1 add</h4><p>The add command commits changes made in your local workspace to the staging area, to be managed by Git; <code>git add .</code> stages all file changes in the current directory.</p><p>The opposite of add is <code>reset</code>, which undoes changes to the staging area.</p><table><thead><tr><th>Command</th><th>Description</th></tr></thead><tbody><tr><td><code>git reset .</code></td><td>Unstage all file changes in the current directory</td></tr><tr><td><code>git reset &lt;dir&gt;</code></td><td>Unstage a directory, including its subdirectories</td></tr><tr><td><code>git reset &lt;file&gt;</code></td><td>Unstage a single file</td></tr></tbody></table><h4 id="6-2-commit"><a href="#6-2-commit" class="headerlink" title="6.2 commit"></a>6.2 commit</h4><p>Commit is a simple command that commits the contents of the staging area to the local repository and moves the HEAD of the current branch forward one commit point.</p><table><thead><tr><th>Command</th><th>Description</th></tr></thead><tbody><tr><td><code>git commit -m &lt;message&gt;</code></td><td>Commit the staging area to the local repository; message is the commit message</td></tr><tr><td><code>git commit &lt;file&gt; -m &lt;message&gt;</code></td><td>Commit the specified staged file to the local repository</td></tr><tr><td><code>git commit --amend -m &lt;message&gt;</code></td><td>Replace the previous commit with a new one</td></tr></tbody></table><h4 id="6-3-reset"><a href="#6-3-reset" class="headerlink" title="6.3 reset"></a>6.3 reset</h4><p>The opposite of commit is <code>reset --soft</code>, which undoes the commit but leaves the written code intact.</p><table><thead><tr><th>Command</th><th>Description</th></tr></thead><tbody><tr><td><code>git reset --soft HEAD^</code></td><td>Undo the most recent commit; HEAD^ means the previous version, and you can also write HEAD~1. To undo the previous two commits, use HEAD~2</td></tr><tr><td><code>git reset --hard HEAD^</code></td><td>Similar to <code>git reset --soft</code>, except that <code>hard</code> also undoes the <code>git add</code>, discarding the staged changes</td></tr></tbody></table><p>HEAD is Git’s notion of a commit version that always points to the most recent commit on the branch you’re currently on. 
If your branch changes, or if a new commit point is created, HEAD will change.</p><p>What if we want to go back to a commit point?</p><p>In this case, we can use <code>git log --pretty=oneline</code> to get the commit history:</p><p><img src="https://s2.loli.net/2023/11/07/Pf2rX64DLENjoIW.webp"></p><p>Assuming we want to roll back to the “article link update” commit point, we need to copy the previous commit_id: <code>cbfa2e854bcc3b06e91682087270fe483be9e37c</code> and type q to exit.</p><p>Then roll back to this commit with <code>git reset --hard cbfa2e854bcc3b06e91682087270fe483be9e37c</code>.</p><h4 id="6-4-status"><a href="#6-4-status" class="headerlink" title="6.4 status"></a>6.4 status</h4><p>In Git, you can check the status of your code with <code>git status</code>.<img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/XrTaxPsdRHneOF7.webp"></p><p>As shown in the figure, when the code is in the workspace, the modified files appear in red; after the code is in the staging area, the modified files appear in green; after the code is committed to the local repository, it shows <code>nothing to commit, working tree clean</code>.</p><h4 id="6-5-Common-Operations"><a href="#6-5-Common-Operations" class="headerlink" title="6.5 Common Operations"></a>6.5 Common Operations</h4><p>When ZhangSan finishes development on his personal branch, he pushes the code to the remote branch and merges the code from his personal branch into the <code>main</code> branch.</p><blockquote><p>on the feat_zhangsan branch: git add .<br>on the feat_zhangsan branch: git commit -m "feature A2"<br>on the feat_zhangsan branch: git push<br>on the feat_zhangsan branch: git checkout main<br>on the main branch: git fetch<br>on the main branch: git pull<br>on the main branch: git merge origin&#x2F;feat_zhangsan<br>on the main branch: git push</p></blockquote><h3 id="7-Code-merge-management"><a href="#7-Code-merge-management" class="headerlink" title="7. Code merge management"></a>7. 
Code merge management</h3><h4 id="7-1-merge"><a href="#7-1-merge" class="headerlink" title="7.1 merge"></a>7.1 merge</h4><p>The merge command merges code from different branches.</p><p><img src="https://s2.loli.net/2023/11/07/EPtxQHRzTmer5iu.webp"></p><p>As shown above, in practice we may cut a dev branch from the master branch, develop on it through many commits until the requirement is complete, and then merge it back into master.</p><ul><li><strong>git fetch [branch]</strong>: pull the latest code from the remote repository before merging; you can specify a remote branch after fetch, and if you don’t, it defaults to the remote branch of the current branch.</li><li><strong>git pull</strong>: bring the code on the current branch up to date before merging.</li><li><strong>git merge [branch]</strong>: merge the code from the specified branch into the current branch.</li></ul><p>After merging there may be conflicts that you need to resolve manually. Conflicts arise when multiple users modify the same area of the same file.</p><p>For example, in the figure above, both the v0.2 and dev branches have modified a file on the master branch, and when the dev branch merges into master, you need to resolve the merge conflict.</p><h4 id="7-2-rebase"><a href="#7-2-rebase" class="headerlink" title="7.2 rebase"></a>7.2 rebase</h4><p>A rebase is an alternative to a merge.</p><p><img src="https://s2.loli.net/2023/11/07/bAqPawYK1iGRFve.webp"></p><p>Say we start on the dev branch: if you run git rebase master, any new commits on the dev branch are replayed on top of the master branch, and when it finishes you are still on the dev branch. 
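</p><p>As a sketch, the whole rebase-then-fast-forward workflow can be reproduced in a throwaway repository (the branch and file names here are made up for illustration):</p>

```shell
# Sketch: rebase dev onto master so the later merge is a fast-forward
set -e
cd "$(mktemp -d)"
git init -q && git checkout -q -b master
git config user.email demo@example.com && git config user.name demo
git commit -q --allow-empty -m "A"                  # common ancestor
git checkout -q -b dev
echo dev > d.txt && git add d.txt && git commit -q -m "D"
git checkout -q master
echo main > m.txt && git add m.txt && git commit -q -m "M"

git checkout -q dev
git rebase master          # replay D on top of M; we stay on dev
git checkout -q master
git merge dev              # fast-forward: history stays a straight line
git log --oneline          # D, M, A with no merge commit
```

<p>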
This is the same as merging: the branch you’re on doesn’t change before or after the operation.</p><p>git rebase master means that the dev branch wants to continue on the shoulders of the master.</p><p>Like merge, rebase requires manual conflict resolution.</p><h4 id="7-3-The-Difference-Between-a-Rebase-and-a-Merge"><a href="#7-3-The-Difference-Between-a-Rebase-and-a-Merge" class="headerlink" title="7.3 The Difference Between a Rebase and a Merge"></a>7.3 The Difference Between a Rebase and a Merge</h4><p>Now we have two branches, dev and master, with the following commits:</p><figure class="highlight css"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">    D---E  dev</span><br><span class="line">   /</span><br><span class="line">A---B---C---F  master</span><br></pre></td></tr></table></figure><p>Execute git merge dev on master and you will get the following result:</p><figure class="highlight css"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">    D---E</span><br><span class="line">   /     \</span><br><span class="line">A---B---C---F---G   master</span><br></pre></td></tr></table></figure><p>As you can see, the merge operation creates a new node (G), and the previous commits are still displayed separately.</p><p>This is equivalent to a tree growing new branches, which are then merged back into the main trunk!</p><p>If you have a lot of branches merged in this way, the history looks like a mess, and for those with OCD, the commit history from this kind of merge will look really ugly.</p><p>At this point, some people may ask: why can’t Git’s commit 
history be a clean, straight line? The answer is rebase.</p><p>Run git rebase dev on your master, and you’ll get the following result:</p><figure class="highlight css"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">A---B---D---E---C&#x27;---F&#x27;   master</span><br></pre></td></tr></table></figure><p>The rebase operation does not generate a new node; it fuses the two branches into a single linear history.</p><p>To summarize:</p><ul><li>If you want a clean, linear history tree with no merge commits, then you should choose git rebase;</li><li>If you want to keep the complete history and avoid the risk of rewriting your commit history, you should choose git merge.</li></ul><h4 id="7-4-revert"><a href="#7-4-revert" class="headerlink" title="7.4 revert"></a>7.4 revert</h4><p>git revert undoes a commit, using a new commit to remove the changes made by a historical commit.</p><p><img src="https://s2.loli.net/2023/11/07/14vwuxCUsgRoW53.webp"></p><p>When undoing, revert commits a new version that reverses the content of the version being reverted. 
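</p><p>A minimal sketch of this behavior (the file name and messages are invented for the demo):</p>

```shell
# Sketch: git revert adds a NEW commit that cancels an earlier one
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com && git config user.name demo
echo "v1" > app.txt && git add app.txt && git commit -q -m "v1"
echo "v2" > app.txt && git add app.txt && git commit -q -m "v2"

git revert --no-edit HEAD   # creates a third commit restoring v1's content
cat app.txt                 # prints "v1"
git log --oneline           # "v2" is still in history, followed by its revert
```

<p>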
The HEAD version then moves forward, without affecting the previous commits.</p><ul><li><strong>git revert HEAD</strong>: revert the most recent commit.</li><li><strong>git revert HEAD^</strong>: revert the commit before the most recent one.</li><li><strong>git revert {commit_id}</strong>: revert the specified version; the revert itself is also saved as a commit.</li></ul><h4 id="7-5-Difference-between-revert-and-reset"><a href="#7-5-Difference-between-revert-and-reset" class="headerlink" title="7.5 Difference between revert and reset"></a>7.5 Difference between revert and reset</h4><p><img src="https://s2.loli.net/2023/11/07/XJ7ukoPMRwrBdtl.webp"></p><p>git revert rolls back a previous commit with a new commit, while git reset deletes the specified commit.</p><p>The effect is similar when you roll back, but there is a difference when you merge previous versions later. This is because revert adds a new reverse commit, which is like neutralizing an acid with a base, so when you merge with the old branch later, this part of the change won’t reappear!</p><p>But reset is like sealing the acid away, so when you merge in the future, the reset part of the code will still appear in the history branch, which may cause conflicts.</p><p>The difference between the two is like the difference between a chemical reaction and a physical reaction. Why is this so?</p><p>It’s because Git, as a code versioning tool, doesn’t actually delete a commit when you “delete” it; it seals the commit away.</p><p>Just like the painful memories you keep after falling out of love: even if your brain goes through a shock, you can’t delete those memories, you can only seal them. 
A revert is a way of channeling those painful memories, so that even if you remember them later, they won’t be as painful :)</p><p>Note that git reset moves the version HEAD backwards, whereas git revert moves the version HEAD forward.</p><h4 id="7-6-Other-common-commands"><a href="#7-6-Other-common-commands" class="headerlink" title="7.6 Other common commands"></a>7.6 Other common commands</h4><ul><li><strong>git diff</strong>: show the difference between the staging area and the workspace.</li><li><strong>git diff HEAD</strong>: show the difference between the workspace and the latest commit on the current branch.</li><li><strong>git cherry-pick</strong>: select a commit and merge it into the current branch.</li><li><strong>git rm</strong>: remove a file from the staging area and the workspace.</li><li><strong>git mv</strong>: move or rename a workspace file.</li><li><strong>git blame</strong>: view the change history of a given file as a list.</li><li><strong>git remote</strong>: operate on remote repositories.</li></ul><p>These common Git commands and their detailed descriptions are enough to cover the daily operations of your study and work. I believe that after reading this, you will have a deeper understanding of Git.</p>]]></content>
    
    
    <summary type="html">Since I&#39;ve been working, I&#39;ve been using code collaboration tools, most often Git. But recently I&#39;ve realized that a lot of people (including myself) are only familiar with pulling and committing code on a day-to-day basis, and don&#39;t even know how to use git reset/rebase...</summary>
    
    
    
    <category term="Technology" scheme="https://www.nablepart.com/categories/Technology/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="framework" scheme="https://www.nablepart.com/tags/framework/"/>
    
    <category term="network" scheme="https://www.nablepart.com/tags/network/"/>
    
    <category term="recognize" scheme="https://www.nablepart.com/tags/recognize/"/>
    
    <category term="Crawler" scheme="https://www.nablepart.com/tags/Crawler/"/>
    
    <category term="absolutely" scheme="https://www.nablepart.com/tags/absolutely/"/>
    
    <category term="selenium" scheme="https://www.nablepart.com/tags/selenium/"/>
    
    <category term="Git" scheme="https://www.nablepart.com/tags/Git/"/>
    
  </entry>
  
  <entry>
    <title>Come on, let&#39;s design a net disk system</title>
    <link href="https://www.nablepart.com/ed6b426b3480/"/>
    <id>https://www.nablepart.com/ed6b426b3480/</id>
    <published>2023-11-06T00:04:00.000Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
    <content type="html"><![CDATA[<h1 id="1-Introduction"><a href="#1-Introduction" class="headerlink" title="1. Introduction"></a>1. Introduction</h1><h2 id="1-1-The-Melody-of-Youth"><a href="#1-1-The-Melody-of-Youth" class="headerlink" title="1.1 The Melody of Youth"></a>1.1 The Melody of Youth</h2><p>For example, I used to love the musician VAE: the little bridge and eaves on a Jiangnan night, the wayside inn beyond the pass where the fires had burned out and the guests couldn’t sleep, the lingering purple smoke and fragrance, or that single astonishing glimpse.</p><p>Those touching melodies would suddenly haunt my mind in the dusk after a nap, or in the morning after a heavy rain, and would not disperse for a long time.<img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/b4d8594758454caa9095584856f0e405~tplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>I don’t know if it’s because I miss the people I used to listen to those songs with, or the memories of my youth.</p><h2 id="1-2-The-Wonderful-Use-of-Netflix"><a href="#1-2-The-Wonderful-Use-of-Netflix" class="headerlink" title="1.2 The Wonderful Use of Netflix"></a>1.2 The Wonderful Use of the Net Disk</h2><p>Nostalgic or not, the songs had to be listened to, and paying for a membership was out of the question 🐶</p><p>So the witty (read: broke) me began to look for free resources on all the major platforms. 
I have to say, the Internet is a great invention: as long as you have power and a connection, there is no resource that cannot be found.</p><p>The net disk system in particular is a real workhorse for resource sharing!</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/048c70c470f5435a938bcd9d338f377f%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>I’m sure you’ve all used a net disk; from storing photos to sharing work documents, it has become an integral part of our lives.</p><p>But have you ever wondered what kind of system design supports these functions behind the scenes? Today, let’s explore the architecture of a net disk system.</p><h1 id="2-Web-Disk-System"><a href="#2-Web-Disk-System" class="headerlink" title="2. Web Disk System"></a>2. Web Disk System</h1><p>Baidu Netdisk is a popular cloud storage and file-sharing platform with over 800 million users and 100,000+ PB of storage capacity.</p><p>In this article, we will delve into the core features of the Baidu Netdisk system and how it deals with the challenges of high concurrency and massive storage.</p><h2 id="2-1-Architecture-Overview"><a href="#2-1-Architecture-Overview" class="headerlink" title="2.1 Architecture Overview"></a>2.1 Architecture Overview</h2><p>Baidu Netdisk adopts a distributed architecture to cope with its huge user base and massive storage demand. 
The core components include:</p><ol><li><strong>Client layer</strong>: receives and distributes user requests from different devices, splits and assembles file resources, and interacts directly with back-end services.</li><li><strong>Application microservices</strong>: handle core business logic, such as file upload and download, file sharing, permission control, VIP speed limits, etc.</li><li><strong>Relational DB system</strong>: used for persistent storage of file metadata, as well as basic information such as user rights.</li><li><strong>Message queue</strong>: asynchronous peak-shaving and decoupling, improving write performance and reducing database load and the pressure of frequent communication between applications.</li><li><strong>Registry and cache</strong>: application nodes regularly report their server IPs and ports to the registry so that other servers can call them in real time. The cache stores authentication information such as tokens, or application hotspot data.</li><li><strong>Distributed file system</strong>: used to store unstructured files, such as picture, audio, and video data, with high storage efficiency and good scalability.</li></ol><h2 id="2-2-Challenges-of-Billions-of-Users"><a href="#2-2-Challenges-of-Billions-of-Users" class="headerlink" title="2.2 Challenges of Billions of Users"></a>2.2 Challenges of Billions of Users</h2><p>For a storage system like a net disk, a large amount of data is generated and transmitted every day.</p><p>Taking Baidu Netdisk as an example, the number of users had exceeded 800 million by the end of 2022, and the storage capacity has long exceeded 100,000+ PB (i.e., 100+ billion GB).</p><p>Therefore, designing a net disk system faces the following challenges.</p><h3 id="Large-storage-volume"><a href="#Large-storage-volume" class="headerlink" title="Large storage volume"></a>Large storage volume</h3><p><img 
src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/e917a61b4f164279b41635269807fc00%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>Baidu Netdisk currently has more than 800 million users, and each user’s maximum storage capacity is 1 TB. With an average space utilization of about 10%, each user actually occupies roughly 100+ GB [1000 GB &#x2F; 8], which puts the total storage volume at around 100 billion GB.</p><h3 id="High-throughput"><a href="#High-throughput" class="headerlink" title="High throughput"></a>High throughput</h3><p>Baidu Netdisk has about 200 million daily active users, i.e., roughly 25% of registered users, and each user visits the disk 4 times a day on average.</p><p>Therefore, the QPS of Baidu Netdisk is about 10,000 [200 million users × 4 visits &#x2F; (24 × 3600 seconds)], and peak QPS is about twice the average, i.e., 20,000.</p><h3 id="Network-bandwidth"><a href="#Network-bandwidth" class="headerlink" title="Network bandwidth"></a>Network bandwidth</h3><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/03dc3aff4380446e91d55706729df0b4%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>Assuming the average size of each downloaded file is 2 MB, the network bandwidth load is about 18 GB&#x2F;s [200 million × 4 × 2 MB &#x2F; (24 × 3600 × 1024)], i.e., 144 Gb&#x2F;s. Peak bandwidth is about twice the average, roughly 288 Gb&#x2F;s.</p><h2 id="2-3-Functional-Requirements"><a href="#2-3-Functional-Requirements" class="headerlink" title="2.3 Functional Requirements"></a>2.3 Functional Requirements</h2><p>The common functions are as follows:</p><ol><li>Support user registration and login, VIP subscription, and account cancellation.</li><li>Upload and download files.</li><li>Add friends and share files among friends.</li><li>Add, modify, and delete storage accounts.</li><li>Add, modify, and delete storage directories.</li><li>Rename files or delete unwanted files.</li><li>Send files to friends, or share files with strangers via links.</li></ol><h2 id="2-4-Non-Functional-Requirements"><a href="#2-4-Non-Functional-Requirements" class="headerlink" title="2.4 Non-Functional Requirements"></a>2.4 Non-Functional Requirements</h2><p>The current design of the net disk system must meet the following requirements:</p><ol><li>Massive data storage: 800 million registered users, about 25% active, roughly 100 million TB of space.</li><li>High concurrent access: 10,000 QPS on average, 20,000 QPS at peak.</li><li>High traffic load: average network bandwidth of 144 Gb&#x2F;s, about 288 Gb&#x2F;s at peak.</li><li>Highly reliable storage: files cannot be lost; persistent storage reaches six 9s of reliability, i.e., at most 1 file lost or damaged per million.</li><li>Highly available service: users can upload normally, and the download function reaches four 9s of availability, i.e., at most about 53 minutes of unavailability per year [365 × 24 × 60 × 0.0001].</li><li>Permission control: files must be stored in isolation; apart from the user’s own and shared files, no files are visible to others.</li></ol><h1 id="3-Core-Functions"><a href="#3-Core-Functions" class="headerlink" title="3. Core Functions"></a>3. 
Core Functions</h1><h2 id="3-1-File-Upload-and-Download"><a href="#3-1-File-Upload-and-Download" class="headerlink" title="3.1 File Upload and Download"></a>3.1 File Upload and Download</h2><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/bd2752a4449447de8e059b1aa20aa17f%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><h4 id="File-Upload"><a href="#File-Upload" class="headerlink" title="File Upload"></a>File Upload</h4><p>Users upload files through the net disk client or web interface. After the upload request passes through the client application layer, to ensure the reliability of large-file uploads, we can slice files into pieces according to their size before uploading.</p><p>The client then calls the application microservices to process the file’s basic data (metadata) and its content, uploading the metadata and the file content asynchronously and separately.</p><h4 id="File-Download"><a href="#File-Download" class="headerlink" title="File Download"></a>File Download</h4><p>When a user requests to download a file, the client layer sends the request to the application microservices.</p><p>To increase download speed, file blocks can be downloaded concurrently from the server; the file is then assembled on the client side and returned to the user’s device.</p><h2 id="3-2-File-Sharing"><a href="#3-2-File-Sharing" class="headerlink" title="3.2 File Sharing"></a>3.2 File Sharing</h2><h4 id="Friends-Sharing"><a href="#Friends-Sharing" class="headerlink" title="Friends Sharing"></a>Friends Sharing</h4><p>Users can share files or folders with their friends. 
When sharing, they can specify read-only or dump permissions for their friends, as well as the time period for which the file is shared.</p><ul><li>Read-only permission: when a friend receives read-only permission, he&#x2F;she can only view the contents of the file or folder, but cannot save, modify, or delete the file.</li><li>Dump permission: when a friend receives dump permission, he&#x2F;she can save the file to his&#x2F;her own storage space within the time limit, and can share the file again.</li></ul><h4 id="Link-Sharing"><a href="#Link-Sharing" class="headerlink" title="Link Sharing"></a>Link Sharing</h4><p>Users can share files or folders via links. The permission granted by <strong>link sharing is dump permission</strong> by default, and the sharing scope can be set as public, private, or restricted to specific users.</p><ul><li>Public scope: anyone can access the file or folder, and can dump the file to their own storage space.</li><li>Private scope: a link is generated for conveniently opening the file, but only the user himself can access it.</li><li>Specific-user scope: only the user’s friends, or the specific people the link was shared with, can open it; when anyone else opens the link, a no-permission message is shown.</li></ul><h1 id="4-Detailed-Design"><a href="#4-Detailed-Design" class="headerlink" title="4. Detailed Design"></a>4. 
Detailed Design</h1><h2 id="4-1-File-storage-and-metadata-management"><a href="#4-1-File-storage-and-metadata-management" class="headerlink" title="4.1 File storage and metadata management"></a>4.1 File storage and metadata management</h2><h3 id="Separate-storage"><a href="#Separate-storage" class="headerlink" title="Separate storage"></a>Separate storage</h3><p>Relational databases like MySQL are not suitable for storing large data files, while file systems like HDFS and Ceph are very slow at data queries.</p><p>So we divide file data into metadata and file content and store them separately, where:</p><ul><li>Metadata: basic information including the file owner, file permissions, file type, and sharing information, stored in the relational database MySQL.</li><li>File content: the actual data of the file, such as picture, audio, and video multimedia data, saved in an object storage service, such as a Ceph distributed object storage cluster.</li></ul><p>The system responsible for responding to requests is likewise divided in two: File Metadata Management (FMM) and File Content Management (FCM).</p><p>The architecture diagram is as follows:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/73381891e1e6412a9b117f632582f24e%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>Since user files may include large files such as video and audio, but Ceph is not well suited to storing very large single objects, we split the uploaded file content into many small blocks in order to better upload and download large files.</p><p>This also has the advantage that when large files are uploaded and downloaded in blocks, the blocks can be processed concurrently and then assembled in the SDK to speed up the file transfer. 
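</p><p>As a back-of-the-envelope sketch, the block-splitting and reassembly idea can be demonstrated with standard Unix tools (GNU coreutils assumed; the 8 MB block size matches the example below, and the file names are invented):</p>

```shell
# Sketch: split a file into fixed-size blocks, fingerprint each block,
# then reassemble and verify
set -e
cd "$(mktemp -d)"
head -c 20000000 /dev/urandom > big.bin   # ~20 MB sample file

split -b 8M big.bin block_                # 8 MB blocks: block_aa, block_ab, block_ac
md5sum block_* > blocks.md5               # per-block fingerprints (for dedup/resume)

cat block_* > restored.bin                # client-side reassembly
cmp big.bin restored.bin && echo "OK"     # byte-identical
```

<p>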
Moreover, when the user’s network is disconnected, we only need to re-transmit the remaining file blocks, thus implementing resumable uploads.</p><h3 id="File-upload"><a href="#File-upload" class="headerlink" title="File upload"></a>File upload</h3><p>The sequence of a file upload is as follows:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/0470423387444ea188ce4cd887a284cd%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>After the user uploads a file, the client application divides the file into blocks according to its size, assuming one block is generated for every 8 MB, and then uploads the blocks’ MD5 value information to the metadata management system (FMM).</p><p>The FMM determines whether there are duplicates among the MD5 values in the list of uploaded blocks. Blocks with new MD5 values are assigned ids and recorded in the file metadata tables.</p><p>The FMM then generates an access token and returns it to the client along with the list of blockIds and a list of available FCM servers.</p><p>When the client receives the response from the FMM, it compares the MD5 values and determines which blocks of the file need to be uploaded. 
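</p><p>That comparison step can be sketched in a few lines: diff the local block fingerprints against the list the server already knows about (the hash values below are fake placeholders):</p>

```shell
# Sketch: pick out the blocks whose MD5 the server has never seen
set -e
cd "$(mktemp -d)"
printf '%s\n' md5_aaa md5_bbb md5_ccc | sort > local.md5    # this file's blocks
printf '%s\n' md5_aaa md5_ccc         | sort > server.md5   # already on the server

comm -23 local.md5 server.md5   # prints md5_bbb: the only block to upload
```

<p>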
Then, with the token, the block IDs, and the contents of the blocks to be uploaded, the client calls the available FCM nodes, which store the actual blocks in the object storage system Ceph.</p><p>When a client requests FCM with a list of blockIds, FCM would normally need to call FMM again for user authentication, to ensure that the blockIds came from FMM and were not forged by the user.</p><p>However, for the sake of overall architectural simplicity, we use cached tokens instead of internal API calls, which on the one hand reduces system interactions and on the other improves overall response speed.</p><h3 id="File-Download-1"><a href="#File-Download-1" class="headerlink" title="File Download"></a>File Download</h3><p>The file download sequence is as follows:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/ae6293eb1345438f85e4cb685b46a07c%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>When a user downloads a file, the client passes in the file name, user, and other information to get the file’s metadata from the FMM.</p><p>The FMM server then queries MySQL for the blockId list of the user’s file, obtains the list of accessible FCM servers from ZK, generates an access token in Redis, and returns everything to the client.</p><p>Based on the FCM server list and the blockId list, the client calls the FCM servers to download the file blocks concurrently. After downloading all the blocks, the client assembles them into a complete file and returns it to the user’s device.</p><h3 id="Table-Design"><a href="#Table-Design" class="headerlink" title="Table Design"></a>Table Design</h3><ol><li><strong>User table</strong>: records key user information: ID, user name, cell phone number, used space, user type (VIP or ordinary), etc.</li><li><strong>File table</strong>: records file metadata and stores the tree structure of files, including file ID, name, owner, parent file ID, number of child files, creation time, file size, etc.</li><li><strong>File_block table</strong>: records the specific information of each file block: ID, file ID, the MD5 value of the block, and so on.</li></ol><h3 id="Upload-and-download-speed-limit"><a href="#Upload-and-download-speed-limit" class="headerlink" title="Upload and download speed limit"></a>Upload and download speed limit</h3><p>When designing the net disk, considering the system’s large number of users and storage volume, we put both the application systems and the storage servers into clusters, and integrate load balancing, service gateways, and other infrastructure to provide failover, high availability, and elastic scaling.</p><p>Based on the cost of memory and network bandwidth, we can’t simply add machines to guarantee every user’s upload and download rate; and based on commercial considerations, we can limit the speed of ordinary users who are not members.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/fcda5736b8594a5b9e4161a85a7a07c8%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>The implementation is as follows: when a client asks the FMM system to perform an upload or download task, we first get the user’s type, and if it is an ordinary user, we can appropriately reduce the number of servers when returning the list of available FCM nodes to the client.</p><p>For example, a VIP user may enjoy 50 servers for uploading and downloading at the same time, while an ordinary user is only assigned 5.</p><h2 id="4-2-File-Sharing"><a 
href="#4-2-File-Sharing" class="headerlink" title="4.2 File Sharing"></a>4.2 File Sharing</h2><h3 id="RBAC-Privilege-Control"><a href="#RBAC-Privilege-Control" class="headerlink" title="RBAC Privilege Control"></a>RBAC Privilege Control</h3><p>Since the file sharing of NDN can be modified in real time, we adopt the idea of RBAC (Role-Based Access Control) to control the user’s access to the files.</p><p>The permission-related tables are designed as follows:</p><ol><li><strong>User table</strong>: stores information about system users, as above, including user ID, user name, etc.</li><li><strong>Role table</strong>: defines the roles in the system, each role includes role ID, role name and so on. Common roles are Supervisor, Normal User, Buddy, Read-Only User, Restricted User, and so on.</li><li><strong>UserRole table</strong>: establishes the association between users and roles, records which users have which roles, including user ID and role ID. 4. <strong>File table</strong>: defines the roles in the system.</li><li><strong>File table</strong>: represents the file metadata information in the system, as above, including file ID, file name, etc. 5. <strong>Permission table</strong>: represents the file metadata information in the system, including file ID, file name, etc. 
5.</li><li><strong>Permission table</strong>: defines the permissions of roles on resources, including permission ID, role ID, user ID, file ID, expiration time and so on.</li></ol><p>Through the mechanism of RBAC, we can easily manage users’ permissions on resources, assign permissions according to roles, and reclaim permissions when needed.</p><h3 id="Share-a-file-with-a-friend"><a href="#Share-a-file-with-a-friend" class="headerlink" title="Share a file with a friend"></a>Share a file with a friend</h3><p>With RBAC for permission control, the business process of registering an account and uploading files to share with friends is as follows:</p><ol><li><strong>User registration and login</strong>:<ol start="2"><li></li></ol></li></ol><ul><li>Assign roles to users and insert the related records into the UserRole table, which is initially an ordinary user role that can upload, download, share and manage their files.</li><li>The user creates an account through the registration function and its information is stored in the User table.</li></ul><ol><li><strong>Creating and sharing files</strong>:<ol start="2"><li></li></ol></li></ol><ul><li>Users can create files or folders, and information about these resources is stored in the File table.</li><li>When a user wishes to share a file, he or she can choose to specify who to share it with (other users or friends) and give the file permissions.</li></ul><ol><li><strong>Permission Assignment</strong>:<ol start="2"><li></li></ol></li></ol><ul><li>The file owner grants permissions to a specific buddy by inserting the buddy’s user ID, what role they are assigned, and the corresponding file ID into the Permission table.</li><li>For example, the File Owner role can have full access, while the Buddy role can have Dump permission and Continue Sharing permission, and the Read-Only role can have access only. 1. 
</li></ul><ol start="4"><li><strong>File Access</strong>:</li></ol><ul><li>When a user tries to access a file, the system checks the user’s own role permissions (for example, to determine whether the user is a restricted user) as well as the file-related permissions.</li><li>If the user’s role has the required file-specific permissions (for example, read or write), the user is allowed to access the file.</li></ul><h3 id="Link-sharing"><a href="#Link-sharing" class="headerlink" title="Link sharing"></a>Link sharing</h3><p>The overall process is similar to buddy sharing; the only difference is that when inserting a record into the Permission table, you can set the file’s permissions to public access with the corresponding user set to NULL, so that by default all users can access and dump the file.</p><figure class="highlight sql"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">insert into</span> permission (file_id, role_id, user_id) <span class="keyword">values</span> (<span class="string">&#x27;shared file ID&#x27;</span>, <span class="string">&#x27;public role ID&#x27;</span>, <span class="keyword">NULL</span>)</span><br></pre></td></tr></table></figure><p>In this way, when a user accesses the file and the system determines that the file’s permissions are public, he or she can access or dump the shared file.</p><h3 id="Privilege-reclamation"><a href="#Privilege-reclamation" class="headerlink" title="Privilege reclamation"></a>Privilege reclamation</h3><p>When a resource owner or administrator decides to reclaim a user’s or role’s permissions on a resource, the system deletes the related permission records.</p><p>The implementation adds an expiration time field to the Permission table, so that when a user shares a file with a friend or 
generates a link to share it, he&#x2F;she needs to set a specific expiration time.</p><p>The file system can run a scheduled task to clean up expired permissions periodically, ensuring that files can only be accessed within the validity period.</p><blockquote><p>If you want a link to be accessible indefinitely, you can set the expiration time to a point far in the future.</p></blockquote><h3 id="File-deletion"><a href="#File-deletion" class="headerlink" title="File deletion"></a>File deletion</h3><p>When a user deletes a file, we first obtain the file’s block list via FMM’s interface, then delete the metadata to free up the user’s storage space, and at the same time pass the deleted block list to FCM via a message queue so that the file content can be deleted.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/7f0029f7b91e47fead6b999c36f5043d%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>To ensure transactional consistency between file metadata and file content, we adopt the <strong>best-effort notification</strong> pattern from distributed transactions.</p><p>It is implemented as follows: a monitoring and alerting system is added, so that when file-content deletion fails, an SMS or email notifies the administrator to handle the unsynchronized data manually.</p><p>To the user, the file metadata is no longer visible, so the file content and file metadata only need to reach eventual consistency.</p><h1 id="5-Summary"><a href="#5-Summary" class="headerlink" title="5. Summary"></a>5. Summary</h1><p>Currently, the Internet market is highly competitive, and the netdisk field is no exception.</p><p>There are many domestic netdisk providers, such as Baidu Netdisk, Tencent Weiyun, 360 Cloud Disk, and so on. However, Baidu Netdisk remains the dominant player, with more than 80% of the market share.</p>]]></content>
    
    
    <summary type="html">Baidu Netdisk is a popular cloud storage and file sharing platform with over 800 million users and more than 100,000 PB of storage capacity. In this article, we will delve into the core functionality of the Baidu Netdisk system and how to deal with the challenges that arise from high concurrency and massive storage.</summary>
    
    
    
    <category term="Backend" scheme="https://www.nablepart.com/categories/Backend/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="framework" scheme="https://www.nablepart.com/tags/framework/"/>
    
    <category term="Backend Technology Sharing" scheme="https://www.nablepart.com/tags/Backend-Technology-Sharing/"/>
    
    <category term="network" scheme="https://www.nablepart.com/tags/network/"/>
    
    <category term="Architecture" scheme="https://www.nablepart.com/tags/Architecture/"/>
    
    <category term="Backend Design" scheme="https://www.nablepart.com/tags/Backend-Design/"/>
    
    <category term="Design" scheme="https://www.nablepart.com/tags/Design/"/>
    
    <category term="massive" scheme="https://www.nablepart.com/tags/massive/"/>
    
  </entry>
  
  <entry>
    <title>Come on, let's design a ride-hailing system.</title>
    <link href="https://www.nablepart.com/6e5fabe7c53a/"/>
    <id>https://www.nablepart.com/6e5fabe7c53a/</id>
    <published>2023-11-05T23:04:00.000Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
    <content type="html"><![CDATA[<p><strong>Catalog</strong></p><ol><li><p>Introduction</p></li><li><p>Ride-Hailing System</p></li><li><p>Requirements Design</p></li><li><p>Outline Design</p></li><li><p>Detailed Design</p></li><li><p>Experience Optimization</p></li><li><p>Summary</p></li></ol><h1 id="1-Introduction"><a href="#1-Introduction" class="headerlink" title="1. Introduction"></a>1. Introduction</h1><h2 id="1-1-Typhoon"><a href="#1-1-Typhoon" class="headerlink" title="1.1 Typhoon"></a>1.1 Typhoon</h2><p>Last week, Shenzhen was affected by Typhoon Saola, and from 12:00 on September 1 the city activated its first-level emergency response for typhoon and flood control.</p><p>The concrete impact on Shenzhen’s workers: from 4 p.m. that day the city implemented the “five stops”: work, business, markets, and classes all stopped that afternoon, and public transport stopped after 7 p.m.</p><p>Since everything shut down at 4 p.m., most companies let people off work early. 
Some left work in a hurry, like this:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/9b42790c20bd47ea8eda2dd91b298b73%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>Some were dismissed early, like this:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/713603fe04df40a791d496bee1fbc2d5%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>And then there were those like us who had to keep working from home:</p><h2 id="1-2-Crashing-a-taxi"><a href="#1-2-Crashing-a-taxi" class="headerlink" title="1.2 Crashing a taxi"></a>1.2 Crashing a taxi</h2><p>It was around 4 p.m., and the buses and subways were packed.</p><p>So I figured I would take a cab home as I was about to leave (to keep working from home), but then I opened the DDT app:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/fc6f19e60ac64366823455031e6ad011%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>There were 142 people in the queue, and its sheer length made my heart sink.</p><p>From past experience, the wait for a car on a rainy day is pushed back by about half an hour. What’s more, this was typhoon weather!</p><p>DDT, oh DDT, couldn’t you have prepared in advance? Wait times like this will cost you a lot of order share.</p><p>On the other hand, an emergency warning like this cannot be blamed entirely on the platform; after all, vehicle scheduling takes time, and with everyone scrambling to get home (just kidding) at once, the nearby vehicles were simply not enough.</p><h3 id="Roll-up"><a href="#Roll-up" class="headerlink" title="Roll up"></a>Roll up</h3><p>The wait was long, so I went back to the office to keep reading technical articles. 
At this point it occurred to me: after this emergency dispatch scramble, if I were a development engineer at DDT, how would I handle this situation?</p><p>And if a DDT interviewer were sitting across from me, how would he probe a candidate’s technical depth and product thinking?</p><h1 id="2-Design-a-“net-car-system”"><a href="#2-Design-a-“net-car-system”" class="headerlink" title="2. Design a “net car system”"></a>2. Design a “ride-hailing system”</h1><p>Interviewer: “You’ve used DDT before, right? And your resume says you know architecture design. If you were asked to design a ride-hailing system, what aspects would you consider?”</p><h2 id="2-1-Requirements-Analysis"><a href="#2-1-Requirements-Analysis" class="headerlink" title="2.1 Requirements Analysis"></a>2.1 Requirements Analysis</h2><p>The core function of a ride-hailing system (such as DDT) is to dispatch a passenger’s ride order to nearby ride-hailing drivers; a driver accepts the order, picks up the passenger at the pickup point, and the order is completed when the passenger gets out.</p><p>The driver takes a share of the fare (ranging from 70%-80%) at a percentage agreed with the platform, and the passenger can enable password-free payment backed by the credit score of a third-party platform (e.g., Alipay) so that the order is paid automatically after the ride. The use case diagram is as follows:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/47d20c8d266040b391ef2d32f0d6f7d9%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>Both passengers and drivers have registration and login functions, which belong to the passenger user module and the driver user module respectively. 
The other core functions of the ride-hailing system are hailing rides, order allocation, and driver pickup.</p><h2 id="2-2-Outline-Design"><a href="#2-2-Outline-Design" class="headerlink" title="2.2 Outline Design"></a>2.2 Outline Design</h2><p>A ride-hailing system is a model of “Internet + shared resources”: it matches vehicles with passengers to make better use of existing resources, with one vehicle typically serving many users.</p><p>Passengers and drivers therefore interact with the system very differently. For example, a person may take only one ride a day, while a driver makes many trips a day.</p><p>So we need to develop two apps, one for passengers to hail rides and one for drivers to take orders; the architecture diagram is as follows:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/64aa77b44ae24079a121b3846852df8e%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><h3 id="1-Passenger-perspective"><a href="#1-Passenger-perspective" class="headerlink" title="1) Passenger perspective"></a>1) Passenger perspective</h3><p>As shown above, after registering in the mobile App, a passenger can choose an origin and destination to hail a ride.</p><p>The ride request passes through the load-balancing server and a series of filters such as request forwarding, then arrives at the HTTP gateway cluster, which performs business validation and calls the corresponding microservices.</p><p>For example, requests for a passenger’s personal profile or saved addresses are forwarded to the <strong>User System</strong>. When the passenger wants to hail a ride, the origin, destination, personal location, and other information are sent 
to the <strong>Taxi System</strong>.</p><h3 id="2-Driver’s-view"><a href="#2-Driver’s-view" class="headerlink" title="2) Driver’s view"></a>2) Driver’s view</h3><p>As the figure shows, after a driver registers in the mobile App and starts taking orders, he or she turns on the phone’s location service and sends location updates to the platform at regular intervals over a <strong>TCP long connection</strong>, while receiving order messages pushed by the platform over the same connection.</p><blockquote><p>The Driver App uses a TCP long connection because it must send and receive system messages at regular intervals. If HTTP were used instead, real-time performance would suffer on the one hand, and re-establishing the connection for every exchange would waste resources on the other. </p></blockquote><p>The Driver App sends its current location to the platform every 3~5 seconds, including the vehicle’s latitude and longitude, heading, and so on. The TCP server cluster acts as a gateway that only provides App access over TCP long connections, while the geolocation service manages the drivers’ location information.</p><h3 id="3-Order-Receiving"><a href="#3-Order-Receiving" class="headerlink" title="3) Order Receiving"></a>3) Order Receiving</h3><p>The gateway cluster acts as the <strong>entry point of the business system, responsible for security filtering, rate limiting, and request forwarding</strong>.</p><p>It consists of independently deployed gateway servers; when traffic is heavy, the load-balancing server spreads the pressure across them.</p><p>When a user hails a ride, the request is routed by the load balancer to one of the gateway servers. 
The gateway first calls the order system to create a ride order for the user (with status “created”) and persist it to the database.</p><p>The gateway server then calls the taxi system, which packs the user information, user location, origin, destination, and other data into a message and sends it to a message queue (e.g., RabbitMQ) to wait for a driver to be assigned to the order.</p><h3 id="4-Order-allocation"><a href="#4-Order-allocation" class="headerlink" title="4) Order allocation"></a>4) Order allocation</h3><p>The <strong>Order Allocation System</strong>, as a consumer of the message queue, listens for orders in real time. When it picks up a new order message, it updates the order status to “allocation in progress” and persists the change.</p><p>The order allocation system then sends the user information, user location, origin, and destination to the <strong>Order Push SDK</strong>.</p><p>The order push SDK calls the geolocation system for drivers’ real-time locations, combines them with the user’s pickup point to select the most suitable driver, and sends the order message to the <strong>Message Alert System</strong>; the order allocation system then updates the order status to “driver has taken the order”.</p><p>The dedicated message alert system pushes the order to the matched driver’s App over the TCP long connection.</p><h3 id="5-Order-Rejection-and-Order-Grabbing"><a href="#5-Order-Rejection-and-Order-Grabbing" class="headerlink" title="5) Order Rejection and Order Grabbing"></a>5) Order Rejection and Order Grabbing</h3><p>When the order push SDK assigns a driver, it considers whether the driver’s current order is completed. 
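The order-creation and allocation flow described above can be sketched as a minimal in-process example. This is only an illustration, not DDT’s real implementation: a `queue.Queue` stands in for RabbitMQ, a plain dict stands in for the database row, and a nearest-idle-driver rule stands in for the real dispatch scoring. All function and field names here are made up for the sketch.

```python
import json
import queue

# In-process stand-in for the RabbitMQ queue described above.
ride_queue = queue.Queue()

def publish_ride_request(order_id, user_id, pickup, destination):
    """Gateway side: create the order with status 'created' and enqueue it."""
    order = {"order_id": order_id, "user_id": user_id,
             "pickup": pickup, "destination": destination,
             "status": "created"}
    ride_queue.put(json.dumps(order))  # real code would publish over AMQP
    return order

def allocate_order(drivers):
    """Allocation side: consume one order and pick the nearest idle driver."""
    order = json.loads(ride_queue.get())
    order["status"] = "allocating"  # real code would persist this change
    px, py = order["pickup"]
    # Nearest idle driver by squared straight-line distance (a placeholder
    # for the path-planning refinement discussed later in the article).
    candidates = [d for d in drivers if d["idle"]]
    best = min(candidates,
               key=lambda d: (d["pos"][0] - px) ** 2 + (d["pos"][1] - py) ** 2)
    best["idle"] = False
    order["driver_id"] = best["driver_id"]
    order["status"] = "accepted"
    return order
```

In the real system the two functions would run in separate services, each status change would be written back to the order table, and the push to the driver’s App would go through the message alert system over the TCP long connection.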
Even when the most suitable driver is chosen, the driver can still “reject” the order based on his or her own situation, but the platform records this to evaluate the driver’s order-taking efficiency.</p><p>If a driver rejects too many orders, the platform may lower the driver’s weighting score for <strong>allocated orders</strong> for a period of time, affecting his or her performance.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/9a4547025ada4cd09af073b873f8bd93%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>The assignment logic can also be changed to let drivers grab orders instead. The implementation is as follows:</p><p>When an order is created, the order push SDK pushes the order message <strong>to the Apps of all drivers within a certain geographic range</strong>; any driver in range can grab the order after receiving the message, and the order status changes to “dispatched” once the grab succeeds.</p><h2 id="2-3-Detailed-Design"><a href="#2-3-Detailed-Design" class="headerlink" title="2.3 Detailed Design"></a>2.3 Detailed Design</h2><p>For the detailed design, we will focus on some core functions of the system, such as long-connection management, the address algorithm, and experience optimization.</p><h3 id="1-Advantages-of-Long-Connection"><a href="#1-Advantages-of-Long-Connection" class="headerlink" title="1) Advantages of Long Connection"></a>1) Advantages of Long Connection</h3><p>Web pages usually use short HTTP connections: in a Baidu search, for example, entering a keyword fires off one HTTP request, the most common kind of short connection.</p><p>Large-scale Apps, however, especially those involving message push (such as QQ, WeChat, and Meituan), almost always build a complete set of <strong>TCP long-connection</strong> channels.</p><p>A chart shows the advantages of long 
connections:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/995f39028c424c90b5cea254a235dbbb%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>Image credit: “Mobile Network Optimization Practices in MMT”</p><p>From the figure above we can conclude that, compared with short connections, long connections have three advantages:</p><ol><li>higher connection success rate</li><li>lower network latency</li><li>stable message delivery, with messages less likely to be lost</li></ol><h3 id="2-Long-Connection-Management"><a href="#2-Long-Connection-Management" class="headerlink" title="2) Long Connection Management"></a>2) Long Connection Management</h3><p>As mentioned earlier, the advantages of a long connection are real-time performance and stable message delivery. In the taxi system, drivers need to report their location regularly and receive order data in real time, so the <strong>Driver App accesses the system over a TCP long connection</strong>.</p><p>Unlike stateless HTTP connections, a TCP long connection is stateful. 
Statelessness means that each user request can be sent to any server and every server returns the same result, so the user does not care which server handles the request.</p><blockquote><p>Of course, HTTP/2 can also maintain a stateful long connection; here we assume HTTP/1.x.</p></blockquote><p>To guarantee transmission efficiency and real-time behavior, the server and the user’s mobile app must maintain the state of the long connection, i.e., a <strong>stateful</strong> connection.</p><p>So every report from a Driver App and every message pushed to it travels through a specific connection channel; the channel a Driver App uses to send and receive messages is fixed.</p><p>The driver-side TCP long connections therefore need dedicated management of the connection information between the Driver App and the server; the architecture diagram is as follows:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/4a3533b0592e4fb69939c60096c56616%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>To make sure the right channel can be found for every message received or pushed, we maintain a mapping between each Driver App and its TCP server, which can be stored in Redis.</p><p>When the Driver App logs in for the first time, or disconnects from the server (e.g., the server goes down, the user switches networks, or the app is killed in the background) 
and needs to reconnect, the <strong>Driver App re-applies for a server connection through the Long Connection Management system</strong> (the available addresses are stored in ZooKeeper), and then refreshes the Redis cache after connecting to the server over TCP.</p><h3 id="3-Address-Algorithm"><a href="#3-Address-Algorithm" class="headerlink" title="3) Address Algorithm"></a>3) Address Algorithm</h3><p>When a passenger hails a ride, the order push SDK combines the drivers’ geographic locations with an address algorithm to find the most suitable driver to dispatch.</p><p>Cell phones generally report latitude and longitude. Longitude runs from 180 degrees east to 180 degrees west, and latitude from 90 degrees south to 90 degrees north.</p><p>Taking west longitude and south latitude as negative, longitude spans [-180, 180] and latitude spans [-90, 90]. Using the prime meridian and the equator as boundaries, the earth can be divided into 4 parts.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/c599b35cb09f411186ae0cdb85642b16%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>Following this principle, we can first encode the two-dimensional latitude and longitude into a string that uniquely identifies the location of a driver or passenger. 
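The halving-and-encoding idea just described can be sketched in a few lines. This is an illustrative bit-interleaving encoder, not Redis’s actual GeoHash implementation (which also base32-encodes the bits); the name `geohash_cell` and the 20-bit precision are made up for the example:

```python
def geohash_cell(lat, lon, bits=20):
    """Interleave longitude/latitude range halvings into one bit string.

    Points whose strings share a long prefix fall inside the same
    rectangle, which is the property the dispatch lookup relies on.
    """
    lat_range, lon_range = [-90.0, 90.0], [-180.0, 180.0]
    out = []
    for i in range(bits):
        # Even steps split the longitude range, odd steps the latitude range.
        rng, val = (lon_range, lon) if i % 2 == 0 else (lat_range, lat)
        mid = (rng[0] + rng[1]) / 2
        if val >= mid:
            out.append("1")
            rng[0] = mid  # keep the upper half
        else:
            out.append("0")
            rng[1] = mid  # keep the lower half
    return "".join(out)
```

The longer the common prefix of two encoded strings, the smaller the shared rectangle containing both points, which is exactly what lets a prefix be used as an area key when looking up nearby drivers.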
Then Redis’ GeoHash support can be used to fetch all drivers near the passenger.</p><p>The principle of the GeoHash algorithm is to <strong>convert the passenger’s latitude and longitude into an address-encoded string denoting a rectangular area, through which the algorithm can quickly find all drivers in the same area</strong>.</p><p>The implementation uses a skip-list data structure, as follows:</p><p>A rectangular range of an urban area serves as the GeoHash key, and all drivers within that area are stored in a skip list. When a passenger’s location falls inside the area, all driver information within range is fetched and then further filtered to find the nearest drivers for dispatch.</p><h3 id="4-Experience-Optimization"><a href="#4-Experience-Optimization" class="headerlink" title="4) Experience Optimization"></a>4) Experience Optimization</h3><h4 id="1-Distance-Algorithm"><a href="#1-Distance-Algorithm" class="headerlink" title="1. Distance Algorithm"></a>1. 
Distance Algorithm</h4><p>For real dispatching, allocating orders by straight-line distance alone works rather poorly: Redis computes the spatial distance between two points, but a driver has to follow the road network, and under complex urban road conditions a spatial distance of a few dozen meters may take more than ten minutes to drive.</p><p>So a later refinement is to plan routes using driving distance (rather than spatial distance), the driver’s heading, and the pickup point, computing the distance and time for each driver in the region to reach the passenger.</p><p>Further, if there are multiple passengers and drivers in the area, everyone’s waiting time should be taken into account, to optimize user experience, shorten dispatch time, and increase profits.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/297f3cab05374f94b706f099b0a99bee%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><h4 id="2-Order-prioritization"><a href="#2-Order-prioritization" class="headerlink" title="2. Order prioritization"></a>2. Order prioritization</h4><p>If ride orders are frequently canceled, responsibility can be judged from the driver’s or passenger’s behavior. 
Once responsibility is judged, a reputation score is calculated for both passenger and driver, and users are told that this score will affect their experience and be tied to dispatch priority.</p><h5 id="Driver-Prioritization"><a href="#Driver-Prioritization" class="headerlink" title="Driver Prioritization"></a>Driver Prioritization</h5><p>The driver’s reputation score, number of complaints, number of orders taken, and so on are combined to assign different priorities to different drivers.</p><h5 id="Passenger-Order-Prioritization"><a href="#Passenger-Order-Prioritization" class="headerlink" title="Passenger Order Prioritization"></a>Passenger Order Prioritization</h5><p>A user profile is built from the passenger’s usual ride times, ride distances, pickup points, and other information, in order to arrange drivers sensibly, or to price-gouge regulars (just kidding).</p><p>PS: some shady platforms really do this 🐶 <em>one platform was even reported to charge different prices depending on the phone’s operating system</em>.</p><h1 id="4-Summary"><a href="#4-Summary" class="headerlink" title="4. Summary"></a>4. Summary</h1><h2 id="4-1-Development-of-Internet-Ridesharing-Platforms"><a href="#4-1-Development-of-Internet-Ridesharing-Platforms" class="headerlink" title="4.1 Development of Internet Ridesharing Platforms"></a>4.1 Development of Internet Ridesharing Platforms</h2><p>The global ride-hailing market has reached hundreds of billions of dollars, with main competitors including DDT, Uber, and Grab. 
In China, DDT, the largest ride-hailing platform, already holds the majority of the market share.</p><p>The core business logic of ride-hailing is relatively simple; the main stakeholders are the platform, drivers, vehicles, and consumers.</p><p>The platform connects drivers, vehicles (not strictly necessary, since many drivers work with their own cars), and passengers, and earns the value saved along the sharing-economy chain through effective matching of supply and demand.</p><p>Concretely: passengers hail rides and drivers take orders through the platform, which provides the technical support; passengers pay for the ride, and the platform takes a cut (ranging from 10%-30%) of the transaction amount.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/1a6f91e71a04453fba0edf05eea79d13%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>According to the National Interactive Platform for Network Rental Vehicle Supervision and Information, by the end of February 2023 a total of 303 ride-hailing platform companies nationwide had obtained licenses to operate.</p><p>Some of these are <strong>ride-hailing aggregation platforms</strong> built around Gaode Taxi, Baidu Maps, and Meituan Taxi; others are <strong>travel platforms</strong> built around DDT, Flower Pig, and T3.</p><h2 id="4-2-Current-Situation-of-Netiquette-Platforms"><a href="#4-2-Current-Situation-of-Netiquette-Platforms" class="headerlink" title="4.2 Current Situation of Netiquette Platforms"></a>4.2 Current Situation of Ride-Hailing Platforms</h2><p>With travel restrictions lifted, ride-hailing platforms have come back to life.</p><p>However, because the entry threshold of some aggregation platforms is too low, more and more problems have been exposed recently. 
For example, low compliance rates among vehicles and drivers, safety incidents, liability disputes, and the difficulty passengers face in defending their rights.</p><p>Because of this special model, aggregation platforms have long skirted the edge of the law over where the liability boundary lies between them and the ride-hailing operators.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/c9af693b1aa14971994aa1bb8acda2ad%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>However, as regulation of aggregation platforms has gradually been put in place, the state has issued specific regulatory rules.</p><p>For example, platforms are required to keep <strong>driver-passenger communication records</strong> on file: not only must online chat records be preserved, but voice calls and in-vehicle recordings must also be converted and retained for a period of time for inspection.</p><p>With these humane regulations and continuous technological innovation, ride-hailing platforms may well continue to thrive for some time to come.</p><h3 id="Afterword"><a href="#Afterword" class="headerlink" title="Afterword"></a>Afterword</h3><p>Interviewer: Well, technically deep and well-rounded! This young man is good. Follow~</p>]]></content>
    
    
    <summary type="html">You&#39;ve used DDT before, right? I see your resume says you know architecture design. If you were asked to design a ride-hailing system, what aspects would you consider?</summary>
    
    
    
    <category term="Backend" scheme="https://www.nablepart.com/categories/Backend/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="Backend" scheme="https://www.nablepart.com/tags/Backend/"/>
    
    <category term="framework" scheme="https://www.nablepart.com/tags/framework/"/>
    
    <category term="network" scheme="https://www.nablepart.com/tags/network/"/>
    
    <category term="Interviews" scheme="https://www.nablepart.com/tags/Interviews/"/>
    
    <category term="Architecture" scheme="https://www.nablepart.com/tags/Architecture/"/>
    
    <category term="resume" scheme="https://www.nablepart.com/tags/resume/"/>
    
    <category term="ride-hailing" scheme="https://www.nablepart.com/tags/dropshipping/"/>
    
  </entry>
  
  <entry>
    <title>Come on, let's design a bus ride system.</title>
    <link href="https://www.nablepart.com/735fc2e7369f/"/>
    <id>https://www.nablepart.com/735fc2e7369f/</id>
    <published>2023-11-05T22:04:00.000Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
    <content type="html"><![CDATA[<h1 id="1-Introduction"><a href="#1-Introduction" class="headerlink" title="1. Introduction"></a>1. Introduction</h1><h2 id="1-1-The-Daily-Commute-to-Work"><a href="#1-1-The-Daily-Commute-to-Work" class="headerlink" title="1.1 The Daily Commute to Work"></a>1.1 The Daily Commute to Work</h2><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/035f8e94ab4b44a0a51d20ec203f3e9e%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>Suddenly you realize you’re already running late, so you dash into the bathroom, wash at top speed, throw on your clothes, grab your phone with your left hand and the bread with your right, nibbling breakfast as you dress.</p><p>Then the old commuting dilemma is in front of you again: finish this mouthful of bread, brush your teeth and wash your face, or rush out the door first to catch the bus?</p><p>A hard call, but you make it: put down the bread and hurry out the door. You take out your phone and tap the familiar metro ride app or bus &amp; subway ride-code app.</p><p><strong>Then a QR code lights up on the screen: the key that unlocks your daily commute.</strong></p><p>You walk briskly to the subway station and scan the QR code at the gate; with a “whoosh” the gate opens and you pass through easily, no longer queuing to buy a ticket, no longer rattled by the morning rush-hour crush.</p><p>You step into the subway car, squeeze into a corner, take out your phone, and start planning your day.</p><h2 id="1-2-Bus-Subway-Ridership-System"><a href="#1-2-Bus-Subway-Ridership-System" class="headerlink" title="1.2 Bus &amp; Subway Ridership System"></a>1.2 Bus &amp; Subway Ridership System</h2><p>As described above, all people need for the commute is a cell phone and a QR code.</p><p>So how is this convenient bus and subway ride system designed? How do the technology and architecture behind it support your daily commute?</p><p>Today, let’s open up this modern <strong>commuting gadget for city workers</strong> and dig into the <strong>design and implementation of the ride system</strong>.</p><p>In this post, Xiao ❤ will take you into the world of the ride system to see how it stepped out of science fiction and became an integral part of our daily lives in just a few years.</p><h1 id="2-Requirements-Design"><a href="#2-Requirements-Design" class="headerlink" title="2. Requirements Design"></a>2. 
Requirements Design</h1><h2 id="2-1-Functional-Requirements"><a href="#2-1-Functional-Requirements" class="headerlink" title="2.1 Functional Requirements"></a>2.1 Functional Requirements</h2><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/14e34676deb04c68bf9588c23ea9d697%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><ul><li><strong>User Registration and Login:</strong> Users can register an account through the mobile app or mini program and use it to log in to the system.</li><li><strong>Route Query:</strong> Users can query subway line and station information, including departure times, ticket prices, and so on.</li><li><strong>Ride QR Code:</strong> The system generates a ride QR code from the user’s information.</li><li><strong>Real-time Train Location:</strong> Users can query the real-time location of a train and see how long until it arrives at the current platform.</li><li><strong>Scan-to-Ride and Auto Payment:</strong> Users complete a ride by scanning the QR code when entering and exiting the station; the system automatically calculates the fare from the distance traveled and charges it.</li><li><strong>Transaction Record Query:</strong> Users can query their transaction history, including ride time, amount, route and other information.</li></ul><h2 id="2-2-Non-functional-Requirements-of-Ridesharing-System"><a href="#2-2-Non-functional-Requirements-of-Ridesharing-System" class="headerlink" title="2.2 Non-functional Requirements of Ridesharing System"></a>2.2 Non-functional Requirements of Ridesharing System</h2><p>The ride system serves a very large user base. According to the “China’s Major Cities Commuting Detection Report-2023”, the number of people in first-tier cities who take the bus &amp; subway to work every day generally exceeds ten million, with an average commute of 45-60 minutes concentrated in the morning and evening peak hours.</p><p>Therefore, when designing a ride system whose hotspot data and user crowds are unevenly distributed, the following points need to be considered:</p><ul><li><strong>Non-uniform user distribution:</strong> first-tier cities have several orders of magnitude more users than ordinary cities.</li><li><strong>Non-uniform time distribution:</strong> the system exists to ease the commute to and from work, so user numbers in the morning and evening peaks are several orders of magnitude higher than in other periods.</li><li><strong>High Concurrency:</strong> since a large number of users may use the bus&#x2F;subway system simultaneously during peak hours, the system needs strong concurrent-processing capability.</li><li><strong>High Performance:</strong> to provide fast query and payment services, the system must respond in as short a time as possible.</li><li><strong>Scalability:</strong> as the number of users grows, the system should scale easily to meet future needs.</li><li><strong>Availability:</strong> the system must guarantee 24&#x2F;7 availability and provide service at any time.</li><li><strong>Security and Privacy Protection:</strong> the system must keep user data secure and private, including payment and personal information.</li></ul><h1 id="3-Outline-Design"><a href="#3-Outline-Design" class="headerlink" title="3. Outline Design"></a>3.
Outline Design</h1><h2 id="3-1-Core-Components"><a href="#3-1-Core-Components" class="headerlink" title="3.1 Core Components"></a>3.1 Core Components</h2><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/c4fce28308aa402eab0621f7195d5608%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><ul><li><strong>Front-end Application:</strong> develop a mobile app and mini program providing user registration, login, query and other functions.</li><li><strong>Backend Service:</strong> design backend services covering user management, route query, QR code management, order processing, the payment system and so on.</li><li><strong>Database:</strong> use a relational MySQL cluster to store user information, route information, transaction records and other data.</li><li><strong>Push System:</strong> push the payment result to the user’s phone after the ride, through both online and offline channels.</li><li><strong>Load Balancing and Message Queuing:</strong> consider load balancing and message queuing techniques to improve system performance.</li></ul><h2 id="3-2-Ride-Process"><a href="#3-2-Ride-Process" class="headerlink" title="3.2 Ride Process"></a>3.2 Ride Process</h2><h3 id="1-Interaction-between-user’s-cell-phone-and-backend-system"><a href="#1-Interaction-between-user’s-cell-phone-and-backend-system" class="headerlink" title="1) Interaction between user’s cell phone and backend system"></a>1) Interaction between user’s cell phone and backend system</h3><p>The interaction sequence diagram is as follows:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/87ce170b5d314f3c835c59a10fb45aef%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p><strong>1.
User registration and login:</strong> Users first register in the mobile app and log in to the system, providing personal information such as user name, cell phone number and payment method.</p><p><strong>2. Querying Ride Information:</strong> Users can look up bus&#x2F;subway routes and fare information in the mobile app and choose a suitable route for their trip.</p><p><strong>3. Generating the Ride QR Code:</strong> After the user logs in, the system generates a ride QR code that can be viewed on the user’s phone at any time. This is the common ride code of the city’s public transportation system; it is bound to the user’s account and payment method, so the user can take any bus or subway with it at any time.</p><h3 id="2-Interaction-between-the-user’s-phone-and-the-bus"><a href="#2-Interaction-between-the-user’s-phone-and-the-bus" class="headerlink" title="2) Interaction between the user’s phone and the bus"></a>2) Interaction between the user’s phone and the bus</h3><p>The interaction UML state diagram is as follows:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/391ee91dda6a4880b0b0d1705a764552%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><ol><li><strong>User Entry Code Scanning:</strong> When a user enters a subway station, they scan the ride code on their cell phone at the entry device.
This device sends the scanned ride code to the backend system.</li><li><strong>Inbound Data Processing:</strong> On receiving the entry event, the backend system verifies the validity of the ride code, checks whether the user already has an open entry record, and records the time and location of entry.</li><li><strong>User Outbound Code Scanning:</strong> At the end of the ride, the user scans the ride code on their phone at the exit device.</li><li><strong>Outbound Data Processing:</strong> On receiving the exit event, the backend system verifies the validity of the ride code, checks whether the user has a matching entry record, and records the time and location of exit.</li></ol><h3 id="3-Processing-by-the-backend-system"><a href="#3-Processing-by-the-backend-system" class="headerlink" title="3) Processing by the backend system"></a>3) Processing by the backend system</h3><ol><li><strong>Ride Fare Calculation:</strong> Based on the user’s entry and exit locations and the fare rules, the backend system calculates the fare, which can vary by city and operator.</li><li><strong>Fare Recording and Deduction:</strong> The system records the fare and deducts it from the user’s payment method (e.g., Alipay or WeChat Wallet).</li><li><strong>Ride Record Storage:</strong> All ride records, including entry and exit stops, fares and so on, are stored in the ride record table for users to review and for service providers to bill against.</li><li><strong>User Notification:</strong> If necessary, the system sends the user a notification that the fare has been deducted.</li><li><strong>Database Interaction:</strong> Throughout the process, the system interacts with the database to store and retrieve data such as user information, ride logs and fare information.</li></ol><h1 id="3-Detailed-Design"><a href="#3-Detailed-Design" class="headerlink" title="3.
Detailed Design"></a>3. Detailed Design</h1><h2 id="3-1-Database-Design"><a href="#3-1-Database-Design" class="headerlink" title="3.1 Database Design"></a>3.1 Database Design</h2><ul><li><strong>User Table (User)</strong>, including user ID, cell phone number, password, payment method, creation time, etc.</li><li><strong>QR Code Table (QRCode)</strong>, including QR code ID, user ID, city ID, generation time, validity period, QR code data, etc.</li><li><strong>Vehicle &amp; Metro Train Table (Vehicle)</strong>, including vehicle ID, license plate or metro train number, model (bus or metro), scanning device serial number, etc.</li><li><strong>Trip Record Table (TripRecord)</strong>, including record ID, user ID, vehicle ID, boarding and alighting times, starting and ending stops.</li><li><strong>Payment Record Table (PaymentRecord)</strong>, including payment ID, ride ID, transaction time, transaction amount, payment method, payment status and so on.</li></ul><p>These are the basic tables and fields the bus &amp; subway ride system needs to design; the table structures and fields can then be further optimized for performance and scalability according to specific requirements and system scale.</p><p>Beyond the table structures, the detailed design also covers two core issues:</p><ul><li><p>Shortest route query</p></li><li><p>Ride QR code management</p></li></ul><h2 id="3-2-Shortest-route-query"><a href="#3-2-Shortest-route-query" class="headerlink" title="3.2 Shortest route query"></a>3.2 Shortest route query</h2><p>From the bus &amp; subway routes published by the transportation department, we can draw the following station map:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/665b07f1e7eb403f99a9d0ae0eb834b5%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>Assuming that there are stations A-F in the figure, the transportation
involved includes subway Line 1 and two bus routes, and the user’s start and end points are A and F respectively. We can use Dijkstra’s algorithm to find the shortest path between the two points. The steps are:</p><ol><li>Select A. The shortest path is A-&gt;A &#x3D; 0. With A as the intermediate point, examine its neighbors among the unvisited nodes {B, C, D, E, F}: B and C are adjacent to A, with AB &#x3D; 6 and AC &#x3D; 3. Pick the closer node, C.</li><li>Select C, A-&gt;C &#x3D; 3. The visited set is now {A, C}. With A and C as intermediate points, examine the unvisited nodes {B, D, E, F}: B and D are adjacent, with AB &#x3D; 6 and ACD &#x3D; 3 + 4 &#x3D; 7. Pick the closer node, B.</li><li>Select B, A-&gt;B &#x3D; 6. The visited set is now {A, C, B}. Among the unvisited nodes {D, E, F}, D is adjacent to B and C. D already has a recorded distance of 7, and the new candidate path ABD &#x3D; 6 + 5 &#x3D; 11 is longer, so D keeps its distance of 7.</li><li>Select D, A-&gt;D &#x3D; 7. The visited set is now {A, C, B, D}. D’s unvisited neighbors are {E, F}, with DE &#x3D; 2 and DF &#x3D; 3, giving E &#x3D; 9 and F &#x3D; 10. Pick the closer node, E.</li><li>Select E, A-&gt;E &#x3D; 7 + 2 &#x3D; 9. The visited set is now {A, C, B, D, E}. For the remaining node {F}: via D, F &#x3D; 7 + 3 &#x3D; 10; via E, F &#x3D; 9 + 5 &#x3D; 14. So F’s nearest distance is 10.</li><li>Select F, A-&gt;F &#x3D; 10. The visited set {A, C, B, D, E, F} now contains every node. From point A, the nearest distances are {A &#x3D; 0, C &#x3D; 3, B &#x3D; 6, D &#x3D; 7, E &#x3D; 9, F &#x3D; 10}.</li></ol><p>Before users query routes, the transportation department enters the latitude and longitude of the bus &amp; subway stops into the <strong>Route Management System</strong>, and the stops are stored according to a two-dimensional latitude-longitude spatial encoding.</p><p>We take west longitude and south latitude as negative, so longitude on the earth ranges over [-180, 180] and latitude over [-90, 90]. Taking the prime meridian and the equator as boundaries, the earth can be divided into 4 quadrants.</p><p>Based on this principle, we can first encode the two-dimensional latitude and longitude into a string that uniquely identifies the location of a user or stop.
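The step-by-step traversal above maps directly to code. Below is a minimal Go sketch of Dijkstra’s algorithm over the example station graph; the edge weights (AB=6, AC=3, CD=4, BD=5, DE=2, DF=3, EF=5) are taken from the walkthrough, while the `graph` variable and `dijkstra` function names are illustrative, not from the original system.

```go
package main

import (
	"container/heap"
	"fmt"
)

// Edge weights from the station map in the walkthrough.
var graph = map[string]map[string]int{
	"A": {"B": 6, "C": 3},
	"B": {"A": 6, "D": 5},
	"C": {"A": 3, "D": 4},
	"D": {"B": 5, "C": 4, "E": 2, "F": 3},
	"E": {"D": 2, "F": 5},
	"F": {"D": 3, "E": 5},
}

// item is a station with its tentative distance from the start.
type item struct {
	node string
	dist int
}

// pq is a min-heap ordered by tentative distance.
type pq []item

func (p pq) Len() int           { return len(p) }
func (p pq) Less(i, j int) bool { return p[i].dist < p[j].dist }
func (p pq) Swap(i, j int)      { p[i], p[j] = p[j], p[i] }
func (p *pq) Push(x any)        { *p = append(*p, x.(item)) }
func (p *pq) Pop() any {
	old := *p
	n := len(old)
	it := old[n-1]
	*p = old[:n-1]
	return it
}

// dijkstra returns the shortest distance from start to every reachable station.
func dijkstra(start string) map[string]int {
	dist := map[string]int{start: 0}
	q := &pq{{start, 0}}
	for q.Len() > 0 {
		cur := heap.Pop(q).(item)
		if cur.dist > dist[cur.node] {
			continue // stale queue entry; a shorter path was already found
		}
		for next, w := range graph[cur.node] {
			if d, ok := dist[next]; !ok || cur.dist+w < d {
				dist[next] = cur.dist + w
				heap.Push(q, item{next, cur.dist + w})
			}
		}
	}
	return dist
}

func main() {
	fmt.Println(dijkstra("A")) // map[A:0 B:6 C:3 D:7 E:9 F:10]
}
```

Running it from start A reproduces the distances derived by hand above: {A=0, B=6, C=3, D=7, E=9, F=10}.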
Then we can use Redis’ GeoHash algorithm to retrieve information about all the stops near the user’s starting point.</p><p>The GeoHash algorithm works by <strong>converting the latitude and longitude of a location into an encoded string that represents a rectangular area, which lets us quickly find all stops in the same area</strong>.</p><p>Once the latitude and longitude of the starting location are obtained, the system can call the route management system to find the best bus or subway route based on the nearby stations.</p><p>Once the user selects a route, the navigation engine starts and provides real-time guidance. The navigation engine may use map data and GPS positioning to guide the user to the starting and ending stops.</p><h2 id="3-3-Ride-QR-Code-Management"><a href="#3-3-Ride-QR-Code-Management" class="headerlink" title="3.3 Ride QR Code Management"></a>3.3 Ride QR Code Management</h2><p>Ride codes are generated with QR code (Quick Response Code) technology, which can hold more information and represent more data types than traditional barcodes.</p><p>Generating a QR code is very simple. In Go, for example, it only requires a third-party library:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">import</span> (</span><br><span class="line">    <span class="string">&quot;image/color&quot;</span></span><br><span class="line">    <span class="string">&quot;log&quot;</span></span><br><span class="line"></span><br><span class="line">    <span class="string">&quot;github.com/skip2/go-qrcode&quot;</span></span><br><span class="line">)</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">main</span><span class="params">()</span></span> &#123;</span><br><span class="line">    qr, err := qrcode.New(<span class="string">&quot;https://mp.weixin.qq.com&quot;</span>, qrcode.Medium)</span><br><span class="line">    <span class="keyword">if</span> err != <span class="literal">nil</span> &#123;</span><br><span class="line">        log.Fatal(err)</span><br><span class="line">    &#125;</span><br><span class="line">    qr.BackgroundColor = color.RGBA&#123;<span class="number">50</span>, <span class="number">205</span>, <span class="number">50</span>, <span class="number">255</span>&#125; <span class="comment">// green background</span></span><br><span class="line">    qr.ForegroundColor = color.White</span><br><span class="line">    qr.WriteFile(<span class="number">256</span>, <span class="string">&quot;./wechatgzh_qrcode.png&quot;</span>) <span class="comment">// write a 256x256 PNG</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>The following describes this feature in detail: the interaction between the user and the system, QR code information storage, and handling of high-concurrency requests.</p><ol><li><strong>User-System Interaction:</strong> The user first logs in on the mobile app, and the system verifies the user’s identity and payment method. Once verification succeeds, the system dynamically generates a QR code from the user’s identity information and payment method; this QR code contains the user’s identification information and related ride parameters.</li><li><strong>QR Code Information Storage:</strong> The generated QR code information needs to be stored and associated in the backend.
Usually, this information is stored in a dedicated database table containing the following fields:</li></ol><ul><li>QR Code ID: primary key that uniquely identifies a QR code.</li><li>User ID: unique identifier of the user associated with the ride code.</li><li>QR Code Data: the content of the QR code, including user information and ride parameters.</li><li>Generation Time: timestamp of QR code generation, used for later verification and management.</li><li>Expiration Time: the QR code’s expiry; a time limit is usually set to ensure security.</li></ul><ol start="3"><li><strong>High-Concurrency Request Handling:</strong> Under high concurrency, a large number of users generate and scan QR codes at the same time, so several strategies are needed to handle these requests:</li></ol><ul><li><strong>Load Balancing:</strong> the backend can use load balancing to spread requests across multiple servers and share the load.</li><li><strong>Cache Optimization:</strong> generating a QR code is a relatively time-consuming operation.
Redis can be used to cache generated QR codes to avoid regenerating them.</li><li><strong>Frequency Limiting:</strong> to prevent abuse, the rate of QR code generation can be capped per user, for example at 5 per minute, which can be implemented with rate limiting.</li></ul><p>In short, to generate ride codes with QR code technology, the backend must be able to handle high concurrency, using load balancing, caching and rate-limiting strategies so that users can quickly obtain a valid ride QR code.</p><p>At the same time, the QR code information must be stored and managed securely, e.g., with encrypted storage, to protect users’ privacy and payment information.</p><h1 id="4-Ride-sharing-system-development"><a href="#4-Ride-sharing-system-development" class="headerlink" title="4. Ride-sharing system development"></a>4. Ride-sharing system development</h1><h2 id="4-1-Other-designs"><a href="#4-1-Other-designs" class="headerlink" title="4.1 Other designs"></a>4.1 Other designs</h2><p>In addition, locating buses or trains and computing arrival times may involve core components such as positioning devices, the GPS system, a NoSQL database and a user TCP connection management system, providing users with accurate ride information through real-time data collection, location processing, arrival time calculation and information pushing.</p><p>Auto payment is also an important convenience feature, which can be realized by integrating with third-party payment platforms.</p><h2 id="4-2-Future-Development"><a href="#4-2-Future-Development" class="headerlink" title="4.2 Future Development"></a>4.2 Future Development</h2><p>The future development of bus&#x2F;subway ride systems can take several directions:</p><ul><li><strong>Intelligent Riding:</strong> introduce intelligent devices such as automatic face recognition of passengers and face-scan fare deduction.</li><li><strong>Big Data Analytics:</strong> use big data technology to analyze ridership data and provide better service.</li></ul><p>Throughout design and development, user experience, performance and security must be considered constantly so that the system can meet growing demand.</p><p>Due to space constraints, this concludes the article.</p>]]></content>
    
    
    <summary type="html">All people need to commute to work is a cell phone and a QR code. So how is this convenient bus or subway ride system designed? How do the technology and architecture behind it support our daily commute?</summary>
    
    
    
    <category term="Backend" scheme="https://www.nablepart.com/categories/Backend/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="framework" scheme="https://www.nablepart.com/tags/framework/"/>
    
    <category term="Backend Technology Sharing" scheme="https://www.nablepart.com/tags/Backend-Technology-Sharing/"/>
    
    <category term="build" scheme="https://www.nablepart.com/tags/build/"/>
    
    <category term="convenient" scheme="https://www.nablepart.com/tags/convenient/"/>
    
    <category term="recognize" scheme="https://www.nablepart.com/tags/recognize/"/>
    
    <category term="phone" scheme="https://www.nablepart.com/tags/phone/"/>
    
    <category term="technology" scheme="https://www.nablepart.com/tags/technology/"/>
    
  </entry>
  
  <entry>
    <title>but distributed locking is surprisingly simple...</title>
    <link href="https://www.nablepart.com/78f70f063eb1/"/>
    <id>https://www.nablepart.com/78f70f063eb1/</id>
    <published>2023-11-05T21:04:00.000Z</published>
    <updated>2025-08-25T09:00:39.798Z</updated>
    
    <content type="html"><![CDATA[<p>As a backend developer, whether at work or in interviews, <strong>distributed systems have always been a love&#x2F;hate topic</strong>. They are like a mysterious labyrinth that sometimes makes you lose your way and sometimes reveals amazing treasures.</p><p>Today, let’s talk about a lesser-known but important player in the distributed world, one that acts as a <strong>guard</strong> for the distributed system and protects resources from being accessed at will: the distributed lock!</p><p>Imagine that without distributed locks, multiple distributed nodes access a shared resource at the same time, like a pack of hungry wolves crowding around a piece of meat: everyone wants a bite, the meat gets torn apart, and no one eats a full meal.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/7334e872858543a9a73056ce640ec150%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>With distributed locks, it is as if a sturdy wall were built around the meat, and only one wolf at a time can cross it and enjoy the meal.</p><p>So how exactly does it work? In this article, Xiao ❤ will walk you through how distributed locks solve concurrency problems in distributed systems.</p><h2 id="What-is-a-distributed-lock"><a href="#What-is-a-distributed-lock" class="headerlink" title="What is a distributed lock?"></a>What is a distributed lock?</h2><p>In a distributed system, a distributed lock is a mechanism for coordinating concurrent access to a shared resource across multiple nodes.</p><p>This shared resource can be a database, file, cache, or any data or resource that requires mutually exclusive access. <strong>Distributed locks ensure that only one node can operate on the resource at any given moment, thus maintaining data consistency and reliability.</strong></p><h2 id="Why-use-distributed-locks"><a href="#Why-use-distributed-locks" class="headerlink" title="Why use distributed locks?"></a>Why use distributed locks?</h2><h3 id="1-Data-Consistency"><a href="#1-Data-Consistency" class="headerlink" title="1. Data Consistency"></a>1. Data Consistency</h3><p>In a distributed environment, multiple nodes accessing a shared resource at the same time can lead to data inconsistency. Distributed locks prevent this and ensure data consistency.</p><h3 id="2-Preventing-contention-conditions"><a href="#2-Preventing-contention-conditions" class="headerlink" title="2. Preventing contention conditions"></a>2. Preventing contention conditions</h3><p>Race conditions can occur when multiple nodes access a shared resource concurrently, leading to unpredictable results. Distributed locks effectively prevent race conditions, <strong>ensuring that operations are performed in the expected order</strong>.</p><h3 id="3-Limiting-access-to-resources"><a href="#3-Limiting-access-to-resources" class="headerlink" title="3. Limiting access to resources"></a>3. Limiting access to resources</h3><p>Some resources may need a cap on the number of simultaneous accesses to avoid overload or wasted resources.
Distributed locks can help <strong>control access</strong> to resources.</p><h3 id="Problems-to-be-solved-by-distributed-locks"><a href="#Problems-to-be-solved-by-distributed-locks" class="headerlink" title="Problems to be solved by distributed locks"></a>Problems to be solved by distributed locks</h3><p>The core problem of distributed locking is how to coordinate multiple nodes so that only one node can acquire the lock while the others must wait.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/f575980e0589464c8212613c39a13b99%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>This involves the following key issues:</p><h3 id="1-Mutual-exclusivity"><a href="#1-Mutual-exclusivity" class="headerlink" title="1. Mutual exclusivity"></a>1. Mutual exclusivity</h3><p>Only one node can hold the lock, and the other nodes must wait. This guarantees mutually exclusive access to the resource.</p><h3 id="2-Re-entry"><a href="#2-Re-entry" class="headerlink" title="2. Re-entry"></a>2. Re-entry</h3><p>Re-entrancy means that after an outer function acquires the lock, an inner or recursive call in the same thread can still acquire it.</p><p>In other words, when the same thread enters the same code again, it can obtain the lock again. Its purpose is to <strong>prevent a thread from deadlocking against itself when it acquires the same lock multiple times</strong>.</p><h3 id="3-Timeout-release"><a href="#3-Timeout-release" class="headerlink" title="3. Timeout release"></a>3.
Timeout release</h3><p>Even if a node fails in the middle of its business logic, the lock is released when its timeout expires. This prevents unnecessary thread waiting and wasted resources, as well as deadlocks.</p><h2 id="Distributed-lock-implementation"><a href="#Distributed-lock-implementation" class="headerlink" title="Distributed lock implementation"></a>Distributed lock implementation</h2><p>In a distributed system there are multiple ways to implement distributed locks, just as there are different kinds of locks, each with its own characteristics.</p><ul><li>There are database-based locks, like a chef locking the dishes in a cabinet: everyone has to queue up for the key.</li><li>There are ZooKeeper-based locks, like a doorman for the whole restaurant who lets only one person in while everyone else waits at the door.</li><li>And there are cache-based locks, like a waiter who seats you on a first-come, first-served basis with a numbered ticket.</li></ul><h3 id="1-Database-based-distributed-locks"><a href="#1-Database-based-distributed-locks" class="headerlink" title="1. Database-based distributed locks"></a>1. Database-based distributed locks</h3><p>Use a row in a database table as the lock, and acquire and release it through transactions.</p><p>For example, use <code>MySQL</code> to implement transaction locking.
First create a simple table and create a unique index on one of its columns (to ensure that when multiple requests try to insert the same value, only one can succeed).</p><figure class="highlight sql"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">CREATE TABLE</span> `<span class="keyword">user</span>` (  </span><br><span class="line">  `id` <span class="type">bigint</span>(<span class="number">20</span>) <span class="keyword">NOT NULL</span> AUTO_INCREMENT,  </span><br><span class="line">  `uname` <span class="type">varchar</span>(<span class="number">255</span>) <span class="keyword">DEFAULT</span> <span class="keyword">NULL</span>,  </span><br><span class="line">  <span class="keyword">PRIMARY KEY</span> (`id`),  </span><br><span class="line">  <span class="keyword">UNIQUE</span> KEY `name` (`uname`) <span class="keyword">USING</span> BTREE</span><br><span class="line">) ENGINE<span class="operator">=</span>InnoDB AUTO_INCREMENT<span class="operator">=</span><span class="number">4</span> <span class="keyword">DEFAULT</span> CHARSET<span class="operator">=</span>utf8mb4</span><br></pre></td></tr></table></figure><p>Execute the following statement when you need to acquire a distributed lock:</p><figure class="highlight sql"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">INSERT INTO</span> `<span class="keyword">user</span>` (uname) <span class="keyword">VALUES</span> (<span class="string">&#x27;unique_key&#x27;</span>)</span><br></pre></td></tr></table></figure><p>Because the <code>uname</code> column is uniquely indexed, when multiple requests submit <code>insert</code> statements, only one request succeeds.</p><p>The advantage of using
<code>MySQL</code> to implement distributed locks is reliability, but performance is poor, and the lock is <strong>non-reentrant: the same thread cannot acquire it again</strong> until it has been released.</p><h3 id="2-Distributed-locks-based-on-ZooKeeper"><a href="#2-Distributed-locks-based-on-ZooKeeper" class="headerlink" title="2. Distributed locks based on ZooKeeper"></a>2. Distributed locks based on ZooKeeper</h3><p><strong>ZooKeeper (zk for short) is middleware that provides consistency services for distributed applications</strong>, organized internally as a hierarchical, file-system-like directory tree.</p><p>zk requires that file names under a given directory be unique. Its distributed locks are implemented as follows:</p><ol><li><strong>Create a lock directory (ZNode):</strong> first, create a directory in zk dedicated to storing locks, often called the lock root. This directory will hold all lock-acquisition requests and the nodes used for lock coordination.</li><li><strong>Acquire the lock:</strong> when a node wants to acquire the lock, it creates an Ephemeral Sequential Node in the lock directory. zk assigns each node a unique sequence number, and the order of lock acquisition is determined by the size of that number.</li><li><strong>Check whether the lock is held:</strong> after creating its Ephemeral Sequential Node, the node checks whether its node has the smallest sequence number in the lock directory. If so, it has acquired the lock; if not, it must listen for the deletion of the node with the next smaller sequence number.</li><li><strong>Listen for lock release:</strong> if a node does not acquire the lock, it sets a watch on the deletion event of the node with a smaller sequence number than its own.
Once the previous node (the node with the smaller sequence number) releases the lock, zk will notify the waiting node.</li><li><strong>Release the lock</strong>: when a node finishes operating on the shared resource, it deletes the ephemeral node it created, which triggers zk&#x27;s notification to the waiting node.</li></ol><p>zk distributed locks provide good consistency and availability, but they are more complex to deploy and maintain, requiring careful handling of various boundary cases, such as node creation, deletion, and network partitioning.</p><p>Moreover, the performance of zk distributed locks is not great, mainly because every lock acquisition and release must be executed on the cluster&#x27;s <code>Leader</code> node and synchronized across the cluster, which is slow.</p><h3 id="3-Cache-based-distributed-locking"><a href="#3-Cache-based-distributed-locking" class="headerlink" title="3. Cache-based distributed locking"></a>3. Cache-based distributed locking</h3><p>Distributed caches, such as Redis or Memcached, are used to store lock information. 
The cache approach has higher performance, but it needs to deal with the high availability and consistency of the distributed cache itself.</p><p>Next, we discuss in detail how to design a highly available distributed lock in Redis and several problems that may be encountered, including:</p><ol><li>deadlock problems</li><li>locks released prematurely</li><li>locks mistakenly deleted by other threads</li><li>high availability issues</li></ol><h4 id="1-Deadlock-problems"><a href="#1-Deadlock-problems" class="headerlink" title="1) Deadlock problems"></a>1) Deadlock problems</h4><p>In earlier versions of <code>Redis</code>, the <code>setnx</code> command could not set a timeout parameter when writing a key, so the lock&#x27;s expiration time had to be set separately with the <code>expire</code> command, which can lead to deadlock.</p><p>For example, if the <code>expire</code> command fails to execute after the lock is acquired, the lock never expires and all subsequent attempts to grab it fail.</p><h4 id="Lua-script-or-SETNX"><a href="#Lua-script-or-SETNX" class="headerlink" title="Lua script or SETNX"></a>Lua script or SETNX</h4><p>To ensure atomicity, we can use a Lua script that executes the <code>SETNX + EXPIRE</code> pair atomically, or we can make clever use of the extension parameters of <code>Redis</code>&#x27;s <code>SET</code> command: <code>SET key value [EX seconds][PX milliseconds][NX|XX]</code>, which is also atomic.</p><blockquote><p>SET key value [EX seconds] [PX milliseconds] [NX|XX]</p></blockquote><ul><li>NX: the <code>set</code> succeeds only if the <code>key</code> does not exist, i.e., it guarantees that only the first client request acquires the lock, while other client requests must wait for the lock to be released.</li><li>EX seconds: set the expiration time of <code>key</code>, in seconds.</li><li>PX milliseconds: set the expiration time of <code>key</code>, in milliseconds.</li><li>XX: sets the value of <code>key</code> only if 
it exists.</li></ul><p>In Go, the key code looks like this:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">getLock</span><span class="params">()</span></span> &#123;    </span><br><span class="line">   methodName := <span class="string">&quot;getLock&quot;</span>    </span><br><span class="line">   val, err := client.Do(<span class="string">&quot;set&quot;</span>, methodName, <span class="string">&quot;lock_value&quot;</span>, <span class="string">&quot;nx&quot;</span>, <span class="string">&quot;ex&quot;</span>, <span class="number">100</span>) </span><br><span class="line">   <span class="keyword">if</span> err != <span class="literal">nil</span> &#123;        </span><br><span class="line">       zaplog.Errorf(<span class="string">&quot;%s set redis lock failed, %s&quot;</span>, methodName, err)</span><br><span class="line">       <span class="keyword">return</span></span><br><span class="line">  &#125;    </span><br><span class="line">   <span class="keyword">if</span> val == <span class="literal">nil</span> &#123; </span><br><span class="line">       zaplog.Errorf(<span class="string">&quot;%s get redis lock failed&quot;</span>, methodName)        </span><br><span class="line">       <span class="keyword">return</span> </span><br><span class="line">  &#125;</span><br><span class="line">   ... 
</span><br><span class="line">   client.Do(<span class="string">&quot;del&quot;</span>, methodName) </span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><h4 id="2-Lock-Early-Release"><a href="#2-Lock-Early-Release" class="headerlink" title="2) Lock Early Release"></a>2) Lock Early Release</h4><p>The above scheme solves the atomicity problem of lock expiration and avoids deadlock, but the lock may still be released early.</p><p>As shown in the figure, suppose we set the lock expiration time to 5 seconds, while the business execution takes 10 seconds.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/04d400ea538d4ddf90e8e788623a8307%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>While thread 1 is still executing its business, its lock expires and is released; at this point thread 2 can take the lock and also starts accessing the public resource.</p><p>Obviously, in this situation <strong>access to the public resource is no longer strictly serialized, breaking the mutual exclusivity of the distributed lock</strong>.</p><p>At this point, some of you may think that since the lock time is too short, we could simply set a longer expiration time.</p><p>In fact, no: first, we cannot know the exact execution time of a business operation in advance; second, the access time of the public resource is likely to change dynamically, so setting a very long expiration is not a good fit either.</p><h4 id="Redisson-Framework"><a href="#Redisson-Framework" class="headerlink" title="Redisson Framework"></a>Redisson Framework</h4><p>So, we might as well give the locking thread an auto-renewal feature, i.e. 
<strong>check whether the lock still exists every so often, and if it does, extend its expiration time to prevent it from expiring and being released early</strong>.</p><p>This feature requires a daemon thread. Fortunately, there is an open-source framework that already does this for us: Redisson. Its implementation is shown in the figure:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/0870cb3dbd6048339335dc980cbf81ef%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>When thread 1 succeeds in acquiring the lock, it starts a <code>Watch dog</code> watchdog: a background thread that periodically (the interval is configurable) checks whether the business still holds the lock, so that a lock that has not been actively released by its thread is automatically renewed.</p><h4 id="3-Locks-mistakenly-released-by-other-threads"><a href="#3-Locks-mistakenly-released-by-other-threads" class="headerlink" title="3) Locks mistakenly released by other threads"></a>3) Locks mistakenly released by other threads</h4><p>In addition to locks being released early, we may also encounter the problem of locks being mistakenly deleted by other threads.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/85169e013a2b49c4b985702897eb76a0%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>As shown in the figure, thread 1 finishes executing its business and goes to release the lock. 
But by this time thread 1’s lock has already expired and been released, and the distributed lock is now held by thread 2, so thread 1 deletes thread 2’s lock by mistake; thread 2’s business may not yet be finished, resulting in an exception.</p><h4 id="Unique-Value"><a href="#Unique-Value" class="headerlink" title="Unique Value"></a>Unique Value</h4><p>To solve the problem of accidental lock deletion, we need to add a unique identifier to each thread’s lock.</p><p>For example, when adding a lock, set the <code>Value</code> to the IP of the thread’s corresponding server. The corresponding Go key code looks like this:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">var</span> (  </span><br><span class="line"></span><br><span class="line">   HostIP = getLocalIP()</span><br><span class="line">)</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">getLock</span><span class="params">()</span></span> &#123;    </span><br><span class="line">   methodName := <span class="string">&quot;getLock&quot;</span>    </span><br><span class="line">   val, err := client.Do(<span class="string">&quot;set&quot;</span>, methodName, HostIP, <span 
class="string">&quot;nx&quot;</span>, <span class="string">&quot;ex&quot;</span>, <span class="number">100</span>) </span><br><span class="line">   <span class="keyword">if</span> err != <span class="literal">nil</span> &#123;        </span><br><span class="line">       zaplog.Errorf(<span class="string">&quot;%s redis error, %s&quot;</span>, methodName, err)</span><br><span class="line">       <span class="keyword">return</span></span><br><span class="line">  &#125;    </span><br><span class="line">   <span class="keyword">if</span> val == <span class="literal">nil</span> &#123; </span><br><span class="line">       zaplog.Errorf(<span class="string">&quot;%s get redis lock error&quot;</span>, methodName)        </span><br><span class="line">       <span class="keyword">return</span> </span><br><span class="line">  &#125;</span><br><span class="line">   ... </span><br><span class="line">   <span class="keyword">if</span> client.Get(methodName) == HostIP &#123;</span><br><span class="line"></span><br><span class="line">       client.Do(<span class="string">&quot;del&quot;</span>, methodName)</span><br><span class="line">  &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>This way, the problem of accidentally deleting another thread’s lock can be avoided by checking whether <code>Value</code> is the IP of the current instance when removing the lock.</p><p>To ensure strict atomicity, the above check-and-delete can be replaced with a <code>Lua</code> script, as shown below:</p><figure class="highlight lua"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">if</span> redis.call(<span class="string">&#x27;get&#x27;</span>, KEYS[<span class="number">1</span>]) == ARGV[<span class="number">1</span>] <span class="keyword">then</span></span><br><span class="line">  <span class="keyword">return</span> redis.call(<span class="string">&#x27;del&#x27;</span>, KEYS[<span class="number">1</span>])</span><br><span class="line"><span class="keyword">else</span></span><br><span class="line">  <span class="keyword">return</span> <span class="number">0</span></span><br><span class="line"><span class="keyword">end</span></span><br></pre></td></tr></table></figure><h4 id="4-Redlock-highly-available-locks"><a href="#4-Redlock-highly-available-locks" class="headerlink" title="4) Redlock highly available locks"></a>4) Redlock highly available locks</h4><p>The previous schemes all assume a stand-alone Redis, but in real business Redis is generally deployed as a cluster, so next we discuss the high-availability problems of Redis distributed locks.</p><p>Imagine that thread 1 acquires a lock on the Redis <code>master</code> node, but the lock has not yet been synchronized to the <code>slave</code> node.</p><p>At this point, if the master node fails and a slave node is promoted to master, other threads can acquire the lock again: <strong>more than one thread may hold the same lock at the same time, i.e., the mutual exclusivity of the distributed lock is broken.</strong></p><p>To solve this problem, the author of Redis proposed a dedicated algorithm for distributed locks: Redis Distributed Lock, Redlock for short, whose core idea is similar to a registry’s election mechanism.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/3500911cb004459e9f449e729dfb4e3b%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>Redlock deploys multiple <code>master</code> nodes that are independent of each other, i.e., there is no data synchronization between the masters.</p><p>The number of nodes is odd; every time a client grabs the lock, it must request the lock from each of these <code>master</code> nodes, and the lock is successfully acquired only when it is obtained from more than half of the nodes.</p><h2 id="Advantages-disadvantages-and-common-implementations"><a href="#Advantages-disadvantages-and-common-implementations" class="headerlink" title="Advantages, disadvantages, and common implementations"></a>Advantages, disadvantages, and common implementations</h2><p>The above are the three distributed-lock implementations commonly used in the industry; their respective advantages and disadvantages are as follows:</p><ul><li><strong>Database-based distributed locks</strong>: high reliability, but poor performance; not suitable for high-concurrency scenarios.</li><li><strong>ZooKeeper-based distributed locks</strong>: good consistency and availability, suitable for complex distributed scenarios, but deployment and maintenance are complex, and performance is not as good as the cache approach.</li><li><strong>Cache-based distributed locks</strong>: higher performance, suitable for most scenarios, but the high availability of the cache itself must be handled.</li></ul><p>Among them, the industry usually chooses the cache-based approach, such as <code>Distributed Locking with Redis</code>. 
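<p>The "more than half of the nodes" rule in Redlock can be expressed as a tiny helper (a sketch; these function names are ours, not part of any Redis client library):</p>

```go
package main

import "fmt"

// quorum returns the minimum number of independent master nodes a
// client must lock on for a Redlock acquisition to count as success.
func quorum(masters int) int {
	return masters/2 + 1
}

// acquired reports whether locking succeeded on enough masters.
func acquired(locked, masters int) bool {
	return locked >= quorum(masters)
}

func main() {
	fmt.Println(quorum(5))      // 3: majority of 5 masters
	fmt.Println(acquired(3, 5)) // true
	fmt.Println(acquired(2, 5)) // false: mutual exclusivity not guaranteed
}
```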
This is because Redis has excellent performance and can meet the needs of most application scenarios.</p><h2 id="Summary"><a href="#Summary" class="headerlink" title="Summary"></a>Summary</h2><p>Despite the twists and turns of the distributed world, with distributed locks we are like the audience at a movie: we enter in an orderly fashion, and the resources in a distributed system are like films waiting to be viewed one at a time.</p><p>That’s the beauty of distributed systems! They may be loved and hated, but <strong>it’s the diverse complexity of the tech world that makes our technological journey so much more exciting.</strong></p>]]></content>
    
    
    <summary type="html">Imagine a world without distributed locks: multiple distributed nodes pile into a shared resource at the same time, like a pack of hungry wolves around a piece of meat; everyone wants a bite, the meat is torn away in the scramble, and in the end no one gets to eat.</summary>
    
    
    
    <category term="Technology" scheme="https://www.nablepart.com/categories/Technology/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="Backend" scheme="https://www.nablepart.com/tags/Backend/"/>
    
    <category term="framework" scheme="https://www.nablepart.com/tags/framework/"/>
    
    <category term="network" scheme="https://www.nablepart.com/tags/network/"/>
    
    <category term="Distributed" scheme="https://www.nablepart.com/tags/Distributed/"/>
    
    <category term="Interviews" scheme="https://www.nablepart.com/tags/Interviews/"/>
    
    <category term="resource" scheme="https://www.nablepart.com/tags/resource/"/>
    
    <category term="hungry" scheme="https://www.nablepart.com/tags/hungry/"/>
    
  </entry>
  
  <entry>
    <title>An article to introduce you to memory management</title>
    <link href="https://www.nablepart.com/b4f922f893f2/"/>
    <id>https://www.nablepart.com/b4f922f893f2/</id>
    <published>2023-11-05T19:04:00.000Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
    <content type="html"><![CDATA[<p>Catalog</p><ol><li>Introduction</li><li>Virtual Memory</li><li>Memory Management</li><li>Escape Analysis</li><li>Summary</li></ol><h2 id="1-Introduction"><a href="#1-Introduction" class="headerlink" title="1. Introduction"></a>1. Introduction</h2><p>Memory management is a topic that developers cannot avoid when writing and tuning programs, and it is also fundamental computer knowledge that senior programmers must master.</p><p>Experienced interviewers gauge a candidate’s skill level by how well he or she has mastered memory management. The knowledge involved may include operating systems, principles of computer organization, and the underlying implementation of programming languages.</p><p>Memory is, in essence, storage; we can look at von Neumann’s computer architecture to understand where it fits:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/af282acb032a448db07e7b78ca08b048%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>As you can see, memory is an integral part of a computer. Memory management is, in fact, the management of memory storage space.</p><p>Next, we will learn about memory management in terms of memory classification and memory space allocation in the Go language, combined with common escape analysis scenarios.</p><h2 id="2-Virtual-Memory"><a href="#2-Virtual-Memory" class="headerlink" title="2. Virtual Memory"></a>2. 
Virtual Memory</h2><h2 id="2-1-The-Difference-Between-Virtual-Memory-and-Physical-Memory"><a href="#2-1-The-Difference-Between-Virtual-Memory-and-Physical-Memory" class="headerlink" title="2.1 The Difference Between Virtual Memory and Physical Memory"></a>2.1 The Difference Between Virtual Memory and Physical Memory</h2><p>As we all know, computer memory used to be very small, and the physical addressing range available to a running program was very limited.</p><p>For example, on a 32-bit machine the addressing range is only 2 to the 32nd power, that is, 4 GB, and this is fixed for every program; you can imagine that if every process were allocated 4 GB of physical memory, it would consume far too many resources.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/c957903820544270a9faa96e14005d9d%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>Resource utilization is also a huge problem. Processes that are not allocated memory have to wait for it; when a running process finishes, a waiting process is loaded into memory, and this frequent loading is very inefficient.</p><p>Moreover, since instructions can access physical memory directly, any process can modify the in-memory data of other processes, or even data in the kernel address space, which is very unsafe.</p><p>Because of the high resource consumption, low utilization, and insecurity of using physical memory directly, <strong>Virtual Memory</strong> was introduced.</p><p>Virtual memory is a memory-management technique in computer systems: <strong>by allocating virtual logical memory addresses, each application program believes it has a continuous block of available memory space</strong>. In reality, these memory spaces are usually multiple fragments of physical memory, with some parts temporarily stored on external disk storage and swapped in when needed.</p><h3 id="2-2-Virtual-Memory-Conversion"><a href="#2-2-Virtual-Memory-Conversion" class="headerlink" title="2.2 Virtual Memory Conversion"></a>2.2 Virtual Memory Conversion</h3><p>Since computers use virtual memory, how do we get the real physical memory addresses? The answer is memory mapping, i.e., converting virtual addresses (also known as logical addresses) into physical addresses.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/a9ad96e8700243fc934533467d40a769%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>Under the Linux operating system, memory is managed in two main ways, paged storage management and segmented storage management, where:</p><ul><li>Paged storage effectively addresses memory fragmentation and improves memory utilization;</li><li>Segmented storage management reflects the logical structure of the program and facilitates segment sharing.</li></ul><p>In layman’s terms, there are two units of memory management: paging and segmentation. <strong>Paging cuts the entire virtual and physical memory space into many fixed-size chunks</strong>, with the mapping between virtual and physical addresses maintained through <strong>page tables</strong>:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/6938f7fab54747baa593d3c773245001%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>Paged memory is pre-divided, so it does not create tiny memory fragments, and is used more efficiently when allocated.</p><p>This is not the case with segmentation, which follows the logic of the program; since program attributes can vary greatly, the sizes of the segments also vary. 
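<p>The page-table lookup just described can be made concrete with a toy translation sketch in Go (a hypothetical 4 KB page size and a hand-built page table; real MMUs do this in hardware):</p>

```go
package main

import "fmt"

const pageSize = 4096 // assumed 4 KB pages

// translate splits a virtual address into page number and offset,
// looks the page up in a toy page table (virtual page -> physical
// frame), and rebuilds the physical address.
func translate(pageTable map[uint64]uint64, vaddr uint64) (uint64, bool) {
	page := vaddr / pageSize
	offset := vaddr % pageSize
	frame, ok := pageTable[page]
	if !ok {
		return 0, false // no mapping: a page fault in a real system
	}
	return frame*pageSize + offset, true
}

func main() {
	pt := map[uint64]uint64{0: 7, 1: 3} // virtual page -> physical frame
	paddr, ok := translate(pt, 4100)    // page 1, offset 4 -> frame 3
	fmt.Println(paddr, ok)              // 12292 true
}
```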
In segment management, virtual and physical addresses are mapped through a <strong>segment table</strong>:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/1f2ca2495b474ece81bc940e3d70e3b4%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>It is easy to see that the slices of segmented memory management are not uniform; they are allocated according to the memory each program occupies. The problem this brings is fragmentation: suppose program 1 (1 GB) finishes and releases its memory, and program 4 (needing 1000 MB) is loaded into that hole; a 24 MB fragment is left over, and if the system accumulates a large number of such fragments, overall memory utilization becomes very low.</p><p>Thus, the segment-and-page memory management approach emerged, which combines the two methods above: <strong>first divide the user program into a number of segments, assign a segment name to each segment, and then divide each segment into a number of pages</strong>.</p><p>In a segment-and-page system, to translate a logical address into a physical address, the system must be configured with both segment tables and page tables, which together map user addresses to physical memory space. The system creates a segment table for each process and a page table for each segment. 
The segment table contains the segment number, page table length, and page table start address, and the page table contains the page number and block number.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/42af202fc5e1400593debfb8a0f96c0c%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>During address translation, the segment table is used to find the page table address, then the page table is used to get the page frame number, and finally the physical address.</p><p>The mapping of virtual memory to physical memory is managed at the operating system level. When we are developing, the memory management we deal with is usually only the work a program does within its own virtual address space:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/2c25f47d44174ab69b75120cb4fd50c1%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>Next, we analyze memory management in software development in terms of what constitutes virtual memory.</p><h2 id="3-Memory-Management"><a href="#3-Memory-Management" class="headerlink" title="3. Memory Management"></a>3. Memory Management</h2><p>In virtual memory, a program is divided into five parts: the stack area, heap area, data area, global data area, and code segment.</p><p>Memory management is the rational use of memory space, mainly the allocation and use of two important areas: the heap (Heap) and the stack (Stack).</p><h3 id="3-1-Heap-and-Stack"><a href="#3-1-Heap-and-Stack" class="headerlink" title="3.1 Heap and Stack"></a>3.1 Heap and Stack</h3><p>There are two important address spaces in virtual memory, the heap and the stack. 
For underlying programming languages such as C++, the memory space on the stack is managed by the compiler, while the memory space on the heap must be allocated and reclaimed manually by the programmer.</p><p>In <strong>Go, the memory space on the stack is also managed by the compiler, while the memory space on the heap is managed jointly by the compiler and the garbage collector</strong>, which is a great convenience for us programmers.</p><p>The overhead of allocating and reclaiming memory on the stack is very low, requiring only two instructions: PUSH and POP. PUSH pushes data onto the stack and POP frees the space, consuming only the time it takes to copy the data into memory.</p><p>Allocating on the heap is not only slower, it also makes garbage collection necessary. For example, Go has used tri-color marking plus a hybrid write barrier for garbage collection since 1.8. Overall, heap allocation incurs much higher overhead than stack allocation.</p><blockquote><p>To learn more about Go garbage collection, check out the article “Go Language Garbage Collection”.</p></blockquote><h3 id="3-2-Stack-memory-allocation"><a href="#3-2-Stack-memory-allocation" class="headerlink" title="3.2 Stack memory allocation"></a>3.2 Stack memory allocation</h3><h4 id="1-Memory-allocation-challenges"><a href="#1-Memory-allocation-challenges" class="headerlink" title="1) Memory allocation challenges"></a>1) Memory allocation challenges</h4><ul><li>Memory is requested by user programs such as C&#x2F;C++ ones, which may request and reclaim memory frequently, but every memory allocation requires a system call (memory can only be requested by entering kernel mode), resulting in low performance.</li><li>In addition, multiple threads (and, in Go, goroutines) may access the same address space, which requires locking the memory and brings further overhead.</li><li>Initially the heap is one contiguous block of memory, but as the system keeps requesting and reclaiming memory, many fragments may appear, lowering the efficiency of memory use.</li></ul><p>To cope with these three common problems of memory allocation, the Go language made improvements based on Google’s TCMalloc (ThreadCacheMalloc) allocation scheme. Like TCMalloc, Go’s allocator introduces three components for hierarchical management of memory: the <strong>thread cache (mcache, one per P), central cache (mcentral), and page heap (mheap)</strong>. 
This is shown in the figure:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/1f325f633ae545b9891820aebc2e1c41%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>The thread cache belongs to each individual thread or goroutine scheduler P, and stores the memory-block spans that thread uses. Because memory blocks come in different sizes, there are many span size classes, which manage different sizes of memory space (e.g., 8KB, 16KB, 32KB, …). Since no other thread touches a thread cache, no mutex is needed to protect it, minimizing the performance loss caused by lock contention.</p><p>When the thread cache runs out of space, the central cache serves as the next level for small-object allocation. The central cache corresponds one-to-one with the span classes of the thread cache, and each span class in the central cache keeps two lists of memory blocks, holding partially allocated and full memory spans respectively, to improve allocation efficiency. If the central cache cannot satisfy a request either, it in turn requests space from the page heap.</p><p>To improve space utilization, when allocating medium and large objects (&gt;&#x3D; 32KB), the memory allocator goes directly to the page heap.</p><p>The core of Go memory allocation is <strong>multilevel caching</strong>: objects are categorized by size, and different allocation policies are applied per category. As shown in the figure above, the application requests memory space from different components according to the size of the object (tiny, small, or large).</p><h4 id="2-Stack-Memory-Allocation"><a href="#2-Stack-Memory-Allocation" class="headerlink" title="2) Stack Memory Allocation"></a>2) Stack Memory Allocation</h4><p>The memory in the stack area is usually allocated and released automatically by the compiler. 
Generally speaking, the stack area stores function arguments and local variables, which are created when the function is called and destroyed when it returns; they generally do not live long in the program.</p><p>This linear memory allocation strategy is extremely efficient, but engineers usually have no control over stack memory allocation; it is basically handled by the compiler.</p><p>At runtime the stack space relies on two important global variables, <code>runtime.stackpool</code> and <code>runtime.stackLarge</code>: the global stack cache, which allocates stacks of less than 32KB, and the large stack cache, which allocates stacks of more than 32KB:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/f11965a043f8481ba6696c74f58d090b%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>When allocating stack space, depending on the size of the thread cache and of the requested stack, Go allocates stack space in three different ways:</p><ol><li>if the stack is small, allocate from the global stack cache or from a fixed-size free list on the thread cache;</li><li>if the stack is large, allocate from the global <code>runtime.stackLarge</code> stack cache;</li><li>if the stack is large and <code>runtime.stackLarge</code> is insufficient, request a sufficiently large piece of memory from the heap.</li></ol><p>Since Go 1.4, the minimum stack size is 2KB, which is the initial stack of a goroutine. So when the number of goroutines in a program exceeds what the available stack memory can hold, allocation falls back to the heap. 
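<p>This ceiling is simple arithmetic (the 8 MB budget below is an assumption for illustration; 2 KB is the initial goroutine stack since Go 1.4):</p>

```go
package main

import "fmt"

func main() {
	const maxStack = 8 << 20       // assumed total stack budget: 8 MB
	const goroutineStack = 2 << 10 // initial goroutine stack: 2 KB

	// How many goroutines fit before stacks spill onto the heap.
	fmt.Println(maxStack / goroutineStack) // 4096
}
```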
In other words, although Go can create goroutines without limit using the go keyword, for performance it is best not to let goroutine stacks outgrow the available stack space.</p><p>Assuming the maximum stack memory is 8MB, it is best to create no more than about 4096 goroutines (8MB&#x2F;2KB).</p><h2 id="4-Escape-analysis"><a href="#4-Escape-analysis" class="headerlink" title="4. Escape analysis"></a>4. Escape analysis</h2><h2 id="4-1-How-Go-does-escape-analysis"><a href="#4-1-How-Go-does-escape-analysis" class="headerlink" title="4.1 How Go does escape analysis"></a>4.1 How Go does escape analysis</h2><p>In programming languages like C and C++, which require manual memory management, it is up to the engineer whether an object or structure is allocated on the stack or the heap. This poses a challenge: how to allocate a reasonable amount of space for each variable so as to improve both program efficiency and memory-usage efficiency. In practice, manual memory allocation in C and C++ leads to two problems:</p><ol><li>Objects that don’t need to be allocated on the heap are allocated on the heap - wasting memory space;</li><li>objects that need to be allocated on the heap are allocated on the stack - creating wild pointers and compromising memory safety.</li></ol><p>Compared to wild pointers, wasted memory space is a minor issue. 
In C, it is a common error for a function to return the address of a stack variable to its caller; in the code below, the stack variable <code>i</code> is returned incorrectly:</p><figure class="highlight c"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"><span class="type">int</span> *dangling_pointer() &#123;</span><br><span class="line">    <span class="type">int</span> i = <span class="number">2</span>;</span><br><span class="line">    <span class="keyword">return</span> &amp;i;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>When the <code>dangling_pointer</code> function returns, its local variables are reclaimed (stack frames are popped on return), and the caller is left holding a dangerous wild pointer. If a program contains many such illegal pointer values, they are hard to find and locate in large projects.</p><blockquote><p>When an object is freed or reclaimed but the pointer is not updated and still points to the reclaimed memory address, that pointer is called a wild pointer, also known as a dangling pointer or lost pointer. (Wikipedia)</p></blockquote><p>So, in Go, how does the compiler know whether a variable should be allocated on the heap or the stack, so that this problem is avoided?</p><p><strong>The way the compiler decides where to allocate memory is called escape analysis</strong>. Escape analysis is performed by the compiler during the compilation phase. In compiler optimization, escape analysis is the method used to determine the dynamic scope of pointers. The Go compiler uses escape analysis to determine which variables should be allocated on the stack and which on the heap.</p><p>This includes memory that is implicitly allocated via <code>new</code>, <code>make</code>, and literals. Go’s escape analysis follows two invariants:</p><ol><li>pointers to stack objects cannot be stored in the heap;</li><li>a pointer to a stack object cannot outlive the reclamation of that stack object.</li></ol><p>What does this mean? Let’s translate:</p><ul><li>First, if a pointer stored in the heap points to a stack object, then that object’s memory must instead be allocated on the heap;</li><li>if a pointer would survive after its stack object is reclaimed, then the object can only be allocated on the heap.</li></ul><p>When we request memory, the compiler follows these two principles to decide whether each variable or object goes on the stack or the heap.</p><p>In other words, when an allocation violates one of these principles, a variable that was intended for the stack &quot;escapes&quot; to the heap; this is called a memory escape. A program with a large number of memory escapes is bound to suffer unintended consequences, such as slow garbage collection and excessive memory usage.</p><h3 id="4-2-Four-Escape-Scenarios"><a href="#4-2-Four-Escape-Scenarios" class="headerlink" title="4.2 Four Escape Scenarios"></a>4.2 Four Escape Scenarios</h3><p>In Go, memory on the stack can escape in the following four scenarios.</p><p><strong>1. Pointer escape</strong></p><p>Pointer escape is easy to understand: when we create an object inside a function, the object’s life cycle normally ends with the function, and in that case the object’s memory is allocated on the stack.</p><p>But if a pointer to the object is returned, the function exits while the pointer lives on; the object’s memory cannot be reclaimed when the function ends, so it can only be allocated on the heap.</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">package</span> main</span><br><span class="line"></span><br><span class="line"><span class="keyword">type</span> User <span class="keyword">struct</span> &#123;</span><br><span class="line">    ID     <span class="type">int64</span></span><br><span class="line">    Name   <span class="type">string</span></span><br><span class="line">    Avatar <span class="type">string</span></span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">GetUserInfo</span><span class="params">()</span></span> *User 
&#123;</span><br><span class="line">    <span class="keyword">return</span> &amp;User&#123;</span><br><span class="line">        ID: <span class="number">666666</span>,</span><br><span class="line">        Name: <span class="string">&quot;sim lou&quot;</span>,</span><br><span class="line">        Avatar: <span class="string">&quot;https://www.baidu.com/avatar/666666&quot;</span>,</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">main</span><span class="params">()</span></span> &#123;</span><br><span class="line">    u := GetUserInfo()</span><br><span class="line">    <span class="built_in">println</span>(u.Name)</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>In the example above, if a <code>User</code> value had been returned instead of the pointer <code>*User</code>, it would have been an ordinary local variable allocated on the stack; since a pointer to the object is returned and continues to be used in the main function, the memory can only be allocated on the heap.</p><p>We can inspect variable escapes with the compiler command <code>go build -gcflags -m main.go</code>:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/b93cc2d097d54f8e80c89cde05d00e0a%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p><code>&amp;User&#123;...&#125; escapes to heap</code> means the object escaped to the heap.</p><p><strong>2. interface&#123;&#125; dynamic type escape</strong></p><p>In Go, the empty interface <code>interface&#123;&#125;</code> can represent any type. If a function parameter is an <code>interface&#123;&#125;</code>, it is difficult for the compiler to determine the argument’s concrete type during compilation, so the argument will escape. 
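</p><p>A minimal, hypothetical illustration (the <code>sink</code> helper and <code>boxed</code> variable are ours, not standard library names): storing the boxed argument in a package-level variable forces it to outlive the call, and <code>go build -gcflags=-m</code> should report the escape:</p>

```go
package main

// boxed is a package-level sink: anything stored here must outlive
// every call, so it cannot stay on a goroutine's stack.
var boxed interface{}

// sink is a hypothetical helper whose interface{} parameter erases the
// argument's static type; assigning it to a global forces a heap box.
func sink(v interface{}) { boxed = v }

func main() {
	n := 42
	sink(n) // n is boxed into an interface{} and escapes to the heap
}
```

<p>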
For example, the <code>Println</code> function takes a variadic <code>interface&#123;&#125;</code> parameter:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">Println</span><span class="params">(a ...<span class="keyword">interface</span>&#123;&#125;)</span></span> (n <span class="type">int</span>, err <span class="type">error</span>)</span><br></pre></td></tr></table></figure><p>The version below returns a <code>User</code> value, which also escapes, but the escape point is the call to <code>fmt.Println</code>:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">GetUserInfo</span><span class="params">()</span></span> User &#123;</span><br><span class="line">    <span class="keyword">return</span> User&#123;</span><br><span class="line">        ID: <span class="number">666666</span>,</span><br><span class="line">        Name: <span class="string">&quot;sim lou&quot;</span>,</span><br><span class="line">        Avatar: <span class="string">&quot;https://www.baidu.com/avatar/666666&quot;</span>,</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">main</span><span class="params">()</span></span> &#123;</span><br><span class="line">    u := GetUserInfo()</span><br><span class="line">    
fmt.Println(u.Name) </span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p><strong>3. Insufficient stack space</strong></p><p>The operating system limits the stack space of kernel threads, usually to 8 MB on 64-bit Linux systems. You can use the <code>ulimit -a</code> command to see how much memory the stack is allowed to occupy on your machine.</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">root@cvm_172_16_10_34:~ # ulimit -a</span><br><span class="line">-s: stack size (kbytes)             8192</span><br><span class="line">-u: processes                       655360</span><br><span class="line">-n: file descriptors                655360</span><br></pre></td></tr></table></figure><p>Because stack space is usually small, an improperly implemented recursive function can easily overflow the stack.</p><p>In Go, the runtime grows a goroutine’s stack dynamically as needed, and the initial stack size of a goroutine is 2 KB. When a goroutine is scheduled, it is bound to a kernel thread for execution, and its stack size does not exceed the operating system’s limit.</p><p>For the Go compiler, local variables that exceed a certain size escape to the heap, and the size limit may differ between Go versions. 
Let’s do an experiment (note that an <code>int</code> occupies 8 bytes, so 8192 ints is 64 KB):</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">package</span> main</span><br><span class="line"></span><br><span class="line"><span class="keyword">import</span> <span class="string">&quot;math/rand&quot;</span></span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">generate8191</span><span class="params">()</span></span> &#123;</span><br><span class="line">    nums := <span class="built_in">make</span>([]<span class="type">int</span>, <span class="number">8192</span>) <span class="comment">// 64 KB</span></span><br><span class="line">    <span class="keyword">for</span> i := <span class="number">0</span>; i &lt; <span class="number">8192</span>; i++ &#123;</span><br><span class="line">        nums[i] = rand.Int()</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">generate8192</span><span class="params">()</span></span> &#123;</span><br><span class="line">    nums := <span class="built_in">make</span>([]<span class="type">int</span>, <span class="number">8193</span>) <span class="comment">// just over 64 KB</span></span><br><span class="line">    <span class="keyword">for</span> i := <span class="number">0</span>; i &lt; <span class="number">8193</span>; i++ &#123;</span><br><span class="line">        nums[i] = rand.Int()</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">generate</span><span class="params">(n <span class="type">int</span>)</span></span> &#123;</span><br><span class="line">    nums := <span class="built_in">make</span>([]<span class="type">int</span>, n) <span class="comment">// length unknown at compile time</span></span><br><span class="line">    <span class="keyword">for</span> i := <span class="number">0</span>; i &lt; n; i++ &#123;</span><br><span class="line">        nums[i] = rand.Int()</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">main</span><span class="params">()</span></span> &#123;</span><br><span class="line">    generate8191()</span><br><span class="line">    generate8192()</span><br><span class="line">    generate(<span class="number">1</span>)</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>The compilation results are as follows:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/e6d2d75636734c378b7a770d1b6492d4%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>Notice that <code>make([]int, 8192)</code> does not escape, while <code>make([]int, 8193)</code> and <code>make([]int, n)</code> escape to the heap. In other words, when the memory occupied by a slice exceeds a certain size, or its length cannot be determined at compile time, the object is allocated on the heap.</p><p><strong>4. 
Closures</strong></p><blockquote><p>A closure is the combination of a function bundled together with references to its surrounding state (the lexical environment). In other words, a closure gives an inner function access to the scope of its outer function.</p></blockquote><p>Memory escapes also occur in Go when closures are used. Look at a sample:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">package</span> main</span><br><span class="line"></span><br><span class="line"><span class="keyword">import</span> <span class="string">&quot;fmt&quot;</span></span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">Increase</span><span class="params">()</span></span> <span class="function"><span class="keyword">func</span><span class="params">()</span></span> <span class="type">int</span> &#123;</span><br><span class="line">    n := <span class="number">0</span></span><br><span class="line">    <span class="keyword">return</span> <span class="function"><span class="keyword">func</span><span class="params">()</span></span> <span class="type">int</span> &#123;</span><br><span class="line">        n++</span><br><span class="line">        <span class="keyword">return</span> n</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">main</span><span class="params">()</span></span> 
&#123;</span><br><span class="line">    in := Increase()</span><br><span class="line">    fmt.Println(in()) </span><br><span class="line">    fmt.Println(in()) </span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>The return value of <code>Increase()</code> is a closure that accesses the external variable <code>n</code>, which must live on until <code>in</code> is destroyed. Obviously, the memory occupied by <code>n</code> cannot be reclaimed when <code>Increase()</code> exits, so it escapes to the heap.</p><h3 id="4-3-Performance-Improvement-with-Escape-Analysis"><a href="#4-3-Performance-Improvement-with-Escape-Analysis" class="headerlink" title="4.3 Performance Improvement with Escape Analysis"></a>4.3 Performance Improvement with Escape Analysis</h3><p><strong>Passing a value vs. passing a pointer</strong></p><p>Passing a value copies the entire object, while passing a pointer copies only the pointer address, which points to the same object. Passing pointers reduces value copying, but it causes allocations to escape to the heap, increasing the burden on the garbage collector (GC). In scenarios where objects are created and deleted frequently, the GC overhead caused by passing pointers can seriously impact performance.</p><p>In general, pass a pointer for structures whose original value must be modified or that occupy a relatively large amount of memory. For a small, read-only structure, passing a value directly achieves better performance.</p><h2 id="5-Summary"><a href="#5-Summary" class="headerlink" title="5. Summary"></a>5. Summary</h2><p>Memory allocation is the core logic of runtime memory management. The Go runtime memory allocator uses an allocation strategy similar to <code>TCMalloc</code>: it classifies objects by size and uses multi-layer cache components to improve the allocator’s performance. 
Understanding the design and implementation principles of the Go memory allocator also helps us understand the different choices programming languages make when designing memory allocators.</p><p>Stack memory is an important memory region in an application; it holds local variables and supports function calls. Variables in the stack space are created and destroyed along with the stack, and this part of memory requires little intervention or management from engineers. Modern programming languages reduce our workload through escape analysis, and understanding how stack space is allocated is a great help in understanding the Go runtime.</p>]]></content>
    
    
    <summary type="html">Memory management is an unavoidable topic for developers when writing and tuning programs, and a must-know area of computer knowledge on the way to becoming a senior programmer. Experienced interviewers gauge a candidate&#39;s skill level by their mastery of memory management.</summary>
    
    
    
    <category term="Backend" scheme="https://www.nablepart.com/categories/Backend/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="Backend" scheme="https://www.nablepart.com/tags/Backend/"/>
    
    <category term="framework" scheme="https://www.nablepart.com/tags/framework/"/>
    
    <category term="Interview" scheme="https://www.nablepart.com/tags/Interview/"/>
    
    <category term="candidate" scheme="https://www.nablepart.com/tags/candidate/"/>
    
    <category term="knowledge" scheme="https://www.nablepart.com/tags/knowledge/"/>
    
    <category term="Computer Composition Principles" scheme="https://www.nablepart.com/tags/Computer-Composition-Principles/"/>
    
    <category term="Memory management" scheme="https://www.nablepart.com/tags/Memory-management/"/>
    
  </entry>
  
  <entry>
    <title>A Deep Dive into RabbitMQ Sequential Consumption, Dead Letter Queues</title>
    <link href="https://www.nablepart.com/615d1b75b010/"/>
    <id>https://www.nablepart.com/615d1b75b010/</id>
    <published>2023-11-05T18:04:00.000Z</published>
    <updated>2025-08-25T09:00:39.786Z</updated>
    
    <content type="html"><![CDATA[<h2 id="1-Introduction"><a href="#1-Introduction" class="headerlink" title="1. Introduction"></a>1. Introduction</h2><p>In the previous article (<a href="http://mp.weixin.qq.com/s?__biz=MzI5Nzk2MDgwNg==&mid=2247485140&idx=1&sn=">A great tool for dealing with traffic spikes - messaging middleware</a>), we introduced the uses of message middleware, mainly decoupling, peak shaving, and asynchronous communication, and compared the advantages, disadvantages, and usage scenarios of several messaging middlewares commonly used in the industry.</p><p>In today’s article, let’s talk about <code>RabbitMQ</code>, the earliest messaging middleware I used at work, mainly for asynchronous consumption of large amounts of data.</p><h2 id="2-RabbitMQ"><a href="#2-RabbitMQ" class="headerlink" title="2. RabbitMQ"></a>2. RabbitMQ</h2><h2 id="2-1-Core-Components"><a href="#2-1-Core-Components" class="headerlink" title="2.1 Core Components"></a>2.1 Core Components</h2><p>RabbitMQ is an open source messaging middleware that implements the Advanced Message Queuing Protocol (AMQP) and provides several important components to support the production, transport, and consumption of messages.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/ba450ee6b26c44ae9c10e5030bfd0ee7%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><ol><li><strong>Producer:</strong> The producer is the sender of messages and is responsible for publishing them to the RabbitMQ server. A message can contain any content, such as tasks, logs, or notifications.</li><li><strong>Channel:</strong> The channel used for pushing and receiving messages. 
</li><li><strong>Exchange:</strong> An exchange is a relay station for messages: it receives messages from producers and routes them to one or more queues. Different exchange types, such as <code>fanout</code>, <code>direct</code>, <code>topic</code>, and <code>headers</code>, support different routing rules.</li><li><strong>Queue:</strong> A queue is a buffer for messages: messages are stored in the queue until they are delivered to a consumer, which fetches and processes them.</li><li><strong>Consumer:</strong> A consumer is the receiver of messages: it fetches messages from a queue and processes them. There can be multiple consumers, and they can run on different applications or servers.</li></ol><h3 id="2-2-Workflow"><a href="#2-2-Workflow" class="headerlink" title="2.2 Workflow"></a>2.2 Workflow</h3><p>RabbitMQ works through collaboration between producers, exchanges, and queues. A simple messaging flow looks like this:</p><ol><li>a queue is bound (<code>Binding</code>) to an exchange, which defines the routing rules for messages;</li><li>the producer publishes a message to the exchange, which routes it to one or more queues based on the binding rules;</li><li>consumers fetch messages from the queues and process them.</li></ol><p>This model is highly flexible and can easily handle a large number of messages while ensuring reliable delivery.</p><h3 id="2-3-Features"><a href="#2-3-Features" class="headerlink" title="2.3 Features"></a>2.3 Features</h3><p>When it comes to messaging middleware, the first name that comes to mind is <code>Kafka</code>, yet <code>RabbitMQ</code> is also the preferred choice of many financial and Internet companies for building reliable, scalable, high-performance systems.</p><p>Why is this?</p><p>It starts with the characteristics of RabbitMQ, which are twofold: one is its powerful features, and the other is its reliability!</p><p>RabbitMQ focuses on message reliability and flexibility, suitable for task queuing and messaging. 
Kafka, by contrast, is a distributed streaming platform that focuses on log storage and data distribution.</p><p><strong>Sequential consumption is also a form of reliability: RabbitMQ can use a single queue, or multiple independent single queues, to ensure messages are consumed in order.</strong></p><p>In addition, RabbitMQ provides persistent queues and messages to ensure that messages are not lost if the RabbitMQ server goes down, and producers can use the publish-acknowledgement mechanism to confirm that a message has been received.</p><p>Compared with Kafka, RabbitMQ offers better reliability, and data is less likely to be lost, which makes it the better fit for data-sensitive businesses.</p><p>Moreover, RabbitMQ natively supports the <strong>dead letter queue</strong>, which helps deal with business messages that fail to be processed, and can implement features such as the <strong>delayed queue</strong>. We will introduce these one by one below.</p><h2 id="3-Guaranteed-sequential-consumption"><a href="#3-Guaranteed-sequential-consumption" class="headerlink" title="3. Guaranteed sequential consumption"></a>3. Guaranteed sequential consumption</h2><p>RabbitMQ provides several queue models to guarantee sequential consumption of messages. 
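</p><p>The core of the guarantee is one FIFO queue drained by one consumer. A pure-Go model of this (an in-memory channel standing in for a RabbitMQ queue, not the real client API) shows that the processing order then matches the publish order:</p>

```go
package main

import "fmt"

// consumeInOrder models a single RabbitMQ queue with a single consumer:
// a buffered channel is FIFO, so one consumer draining it processes
// messages in exactly the order they were published.
func consumeInOrder(msgs []string) []string {
	queue := make(chan string, len(msgs))
	for _, m := range msgs { // producer enqueues in business order
		queue <- m
	}
	close(queue)

	processed := make([]string, 0, len(msgs))
	for m := range queue { // exactly one consumer
		processed = append(processed, m)
	}
	return processed
}

func main() {
	fmt.Println(consumeInOrder([]string{"delete", "add", "update"}))
	// prints [delete add update]
}
```

<p>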
This is important for applications such as order processing, payments, and inventory management.</p><h4 id="Scenarios-for-misordered-message-consumption"><a href="#Scenarios-for-misordered-message-consumption" class="headerlink" title="Scenarios for misordered message consumption"></a>Scenarios for misordered message consumption</h4><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/8ddf9fc781934910944aa265c8b80a1c%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>As shown in the figure above, three business messages carry <code>delete</code>, <code>add</code>, and <code>update</code> operations, but the <code>Consumer</code> does not consume them in that order; they end up being applied in the order <code>add, update, delete</code>, and the data becomes misordered.</p><p>RabbitMQ addresses message ordering in three stages.</p><h4 id="1-Sending-messages-into-the-queue"><a href="#1-Sending-messages-into-the-queue" class="headerlink" title="1. Sending messages: into the queue"></a>1. Sending messages: into the queue</h4><p>When sending messages, the business side needs to guarantee ordering, i.e., that producers enqueue messages in the right order.</p><p>In distributed scenarios, where it is hard to guarantee the enqueue order across servers, you can introduce a distributed lock, or have the producer attach a monotonically increasing message ID and a generation timestamp to each message.</p><h4 id="2-Messages-in-the-queue"><a href="#2-Messages-in-the-queue" class="headerlink" title="2. Messages in the queue"></a>2. 
Messages in the queue</h4><p>In RabbitMQ, messages are stored in queues, and messages within the same queue are <code>First In First Out (FIFO)</code>, an order that <strong>RabbitMQ itself guarantees for us</strong>.</p><p>RabbitMQ cannot, however, guarantee the order of messages across different queues, just as people standing in different lines cannot know which line will be served first.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/3a474f6a70834d74b98354f3acf19507%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><h4 id="3-Consumption-messages-out-of-queue"><a href="#3-Consumption-messages-out-of-queue" class="headerlink" title="3. Consumption messages: out of queue"></a>3. Consumption messages: out of queue</h4><p>In general, ordering after messages leave the queue is up to the consumer. When we speak of guaranteeing consumption order, we usually mean the order in which consumers process messages.</p><p><strong>With multiple consumers, it is usually impossible to guarantee message order.</strong></p><p>It is like queuing for food with several servers dishing it out: each server works at a different speed, which corresponds to the differing consumption capabilities of our consumers.</p><p>So, to guarantee message ordering, we can use a single consumer to receive all business messages.</p><p>It is as if only one server is serving the food: whoever queues earlier is served earlier. 
But obviously, this is not very efficient, so we need to weigh the pros and cons: <strong>decide whether the business needs ordering more, or consumption throughput more</strong>.</p><h3 id="Priority-queues"><a href="#Priority-queues" class="headerlink" title="Priority queues"></a>Priority queues</h3><p>Another, more roundabout way to approximate sequential consumption is to use a Priority Queue.</p><p>Since RabbitMQ 3.5, <strong>a priority queue takes effect when consumers are scarce and the server detects that messages are not being consumed in a timely manner</strong>.</p><p>There are two prioritization strategies:</p><ol><li>set the priority of the queue</li><li>set the priority of the message</li></ol><p>When declaring a queue, we can set the queue’s maximum priority via the <code>x-max-priority</code> argument, and set each message’s priority (1 to 10 here) via the <code>Priority</code> property.</p><p>The Golang implementation code is as follows:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line">props := <span class="built_in">make</span>(<span class="keyword">map</span>[<span class="type">string</span>]<span class="keyword">interface</span>&#123;&#125;)</span><br><span class="line"></span><br><span class="line">props[<span class="string">&quot;x-max-priority&quot;</span>] = <span class="number">10</span></span><br><span class="line"></span><br><span class="line">ch.Publish(</span><br><span class="line">   <span class="string">&quot;tizi365&quot;</span>,     </span><br><span class="line">   <span class="string">&quot;&quot;</span>, </span><br><span class="line">   <span class="literal">false</span>,</span><br><span class="line">   <span class="literal">false</span>,</span><br><span class="line">   amqp.Publishing&#123;</span><br><span class="line">       Priority:<span class="number">5</span>, </span><br><span class="line">       DeliveryMode:<span class="number">2</span>,  </span><br><span class="line">       ContentType: <span class="string">&quot;text/plain&quot;</span>,</span><br><span class="line">       Body:       []<span class="type">byte</span>(body),</span><br><span class="line">  &#125;)</span><br></pre></td></tr></table></figure><p>When priority queue consumption is in effect, <strong>higher-priority messages are consumed first, which approximates sequential consumption</strong>.</p><p>Note, however, that the conditions for triggering a priority queue are quite strict, so it is best not to rely on it when the order of business messages must be strictly guaranteed!</p><h2 id="4-Dead-Message-Queues"><a href="#4-Dead-Message-Queues" class="headerlink" title="4. Dead Message Queues"></a>4. 
Dead Message Queues</h2><p>In RabbitMQ, when a message in a queue becomes a dead letter (a message that consumers cannot process normally), it is re-published to an exchange known as the dead letter exchange; <strong>the queue bound to the dead letter exchange is the dead letter queue</strong>.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/0973eb9589e7416a8295207c0f041d09%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><h3 id="Generation-of-dead-letters"><a href="#Generation-of-dead-letters" class="headerlink" title="Generation of dead letters"></a>Generation of dead letters</h3><p>A dead letter is generated when any of the following conditions is met:</p><ol><li>the message is rejected by the consumer with the <code>requeue</code> flag set to false;</li><li>the message’s TTL has expired;</li><li>the queue has reached its maximum length, so the message cannot fit.</li></ol><h3 id="Steps-for-handling-dead-messages"><a href="#Steps-for-handling-dead-messages" class="headerlink" title="Steps for handling dead messages"></a>Steps for handling dead messages</h3><p>To handle dead letters, we define a <code>dead letter exchange</code> (actually just an ordinary exchange that happens to receive dead letters, hence the name) and bind a queue to it (the <code>dead letter queue</code>).</p><p>Finally, with a consumer listening to the dead letter queue, dead letters are handled like normal business messages: they flow from the exchange into the queue and are then consumed normally by that listener.</p><h2 id="5-Delayed-Queues"><a href="#5-Delayed-Queues" class="headerlink" title="5. Delayed Queues"></a>5. 
Delayed Queues</h2><p>RabbitMQ does not support delayed queues natively, but we can use the <code>rabbitmq-delayed-message-exchange</code> plugin, or combine a <code>dead letter queue + message TTL</code> to achieve the same effect.</p><h3 id="5-1-Application-Scenarios"><a href="#5-1-Application-Scenarios" class="headerlink" title="5.1 Application Scenarios"></a>5.1 Application Scenarios</h3><p>When shopping online or buying train tickets on 12306, you have probably met this scenario: after you place an order, the goods are locked for a period while you pay, and <strong>if the order is not paid within that window, it is closed automatically</strong>.</p><p>The state transition diagram is as follows:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/5312f7a0d5f54aa583ad9928e9bf56a6%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><h3 id="5-2-Plugin-Implementation"><a href="#5-2-Plugin-Implementation" class="headerlink" title="5.2 Plugin Implementation"></a>5.2 Plugin Implementation</h3><h4 id="1-Install-the-plugin"><a href="#1-Install-the-plugin" class="headerlink" title="1. Install the plugin"></a>1. 
Install the plugin</h4><p>GitHub address:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">https://github.com/rabbitmq/rabbitmq-delayed-message-exchange/releases</span><br></pre></td></tr></table></figure><p>Download the <code>rabbitmq_delayed_message_exchange-3.8.9-0199d11c.ez</code> file from the assets of the GitHub release page, and place it in RabbitMQ’s plugin directory (the plugins directory).</p><blockquote><p>Tip: the version number may differ from this tutorial; if your RabbitMQ is the latest version, just pick the latest version of the plugin.</p></blockquote><h4 id="2-Activate-the-plugin"><a href="#2-Activate-the-plugin" class="headerlink" title="2. Activate the plugin"></a>2. Activate the plugin</h4><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">rabbitmq-plugins <span class="built_in">enable</span> rabbitmq_delayed_message_exchange</span><br></pre></td></tr></table></figure><h4 id="3-Define-the-exchange"><a href="#3-Define-the-exchange" class="headerlink" title="3. Define the exchange"></a>3. 
Define the exchange</h4><p>Declare a custom exchange of type <code>x-delayed-message</code>, and use the <code>x-delayed-type</code> argument to choose how the underlying exchange routes messages (here <code>direct</code>):</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br></pre></td><td class="code"><pre><span class="line">   props := <span class="built_in">make</span>(<span class="keyword">map</span>[<span class="type">string</span>]<span class="keyword">interface</span>&#123;&#125;)</span><br><span class="line"></span><br><span class="line">   props[<span class="string">&quot;x-delayed-type&quot;</span>] = <span class="string">&quot;direct&quot;</span></span><br><span class="line"></span><br><span class="line">   err = ch.ExchangeDeclare(</span><br><span class="line">       <span class="string">&quot;delay.queue&quot;</span>,   </span><br><span class="line">       <span class="string">&quot;x-delayed-message&quot;</span>, </span><br><span class="line">       <span class="literal">true</span>,     </span><br><span class="line">       <span class="literal">false</span>,    </span><br><span class="line">       <span class="literal">false</span>,</span><br><span class="line">       <span class="literal">false</span>,</span><br><span class="line">       props,      </span><br><span class="line">  )</span><br></pre></td></tr></table></figure><h4 id="4-Send-delayed-messages"><a href="#4-Send-delayed-messages" class="headerlink" title="4. Send delayed messages"></a>4. 
Send delayed messages</h4><p>Set the message’s delay time (in milliseconds) via the <code>x-delay</code> header:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br></pre></td><td class="code"><pre><span class="line">       msgHeaders := <span class="built_in">make</span>(<span class="keyword">map</span>[<span class="type">string</span>]<span class="keyword">interface</span>&#123;&#125;)</span><br><span class="line"></span><br><span class="line">       msgHeaders[<span class="string">&quot;x-delay&quot;</span>] = <span class="number">6000</span></span><br><span class="line"></span><br><span class="line">       err = ch.Publish(</span><br><span class="line">           <span class="string">&quot;delay.queue&quot;</span>,     </span><br><span class="line">           <span class="string">&quot;&quot;</span>, </span><br><span class="line">           <span class="literal">false</span>,</span><br><span class="line">           <span class="literal">false</span>,</span><br><span class="line">           amqp.Publishing&#123;</span><br><span class="line">               Headers:msgHeaders, </span><br><span class="line">               ContentType: <span class="string">&quot;text/plain&quot;</span>,</span><br><span class="line">               Body:       []<span class="type">byte</span>(body),</span><br><span class="line">          &#125;)</span><br></pre></td></tr></table></figure><h3 id="5-3-Dead-Message-Queue-Message-Expiration-Scheme"><a href="#5-3-Dead-Message-Queue-Message-Expiration-Scheme" class="headerlink" title="5.3 Dead Message Queue + 
Message Expiration Scheme"></a>5.3 Dead Message Queue + Message Expiration Scheme</h3><p>The core idea of this scheme is to create a dead letter exchange, a dead letter queue, and a consumer that listens for dead letters.</p><p>Then create messages with a fixed expiration: for example, if the order payment window is 30 min, set the message <code>TTL</code> to 30 min, and <strong>put the message into a queue with no consumers; when the message expires, it becomes a dead letter</strong>.</p><p>The dead letter is re-published to the dead letter exchange, and we consume it from the dead letter queue, using the order ID it carries to determine whether the order has been paid.</p><p>If it has not been paid, the order is canceled and its status is changed to <code>Pending Order</code>; if it has been paid, the status is changed to <code>Completed</code> and this dead letter message is discarded.</p><h2 id="6-Summary"><a href="#6-Summary" class="headerlink" title="6. Summary"></a>6. Summary</h2><p>RabbitMQ is a powerful messaging middleware that plays a key role in many Internet applications, such as surveillance image reporting in the Huawei Camera SDK and asynchronous consumption in most e-commerce systems.</p><p>I hope today’s article helps you understand RabbitMQ better and use it to build reliable messaging systems in your work. The next article ❤ will cover Kafka’s core workflow, underlying principles, and common interview questions, so stay tuned!</p>]]></content>
    
    
    <summary type="html">RabbitMQ is an open source messaging middleware that implements the Advanced Message Queuing Protocol (AMQP) while providing a variety of important components to support the production, transport and consumption of messages.</summary>
    
    
    
    <category term="Backend" scheme="https://www.nablepart.com/categories/Backend/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="RabbitMQ" scheme="https://www.nablepart.com/tags/RabbitMQ/"/>
    
    <category term="Backend" scheme="https://www.nablepart.com/tags/Backend/"/>
    
    <category term="Message Queue" scheme="https://www.nablepart.com/tags/Message-Queue/"/>
    
    <category term="important" scheme="https://www.nablepart.com/tags/important/"/>
    
    <category term="framework" scheme="https://www.nablepart.com/tags/framework/"/>
    
    <category term="messages" scheme="https://www.nablepart.com/tags/messages/"/>
    
    <category term="consumption" scheme="https://www.nablepart.com/tags/consumption/"/>
    
  </entry>
  
  <entry>
    <title>A chart to understand the SQL execution process</title>
    <link href="https://www.nablepart.com/729eddb9596a/"/>
    <id>https://www.nablepart.com/729eddb9596a/</id>
    <published>2023-11-05T17:04:00.000Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
    <content type="html"><![CDATA[<h2 id="1-Introduction"><a href="#1-Introduction" class="headerlink" title="1. Introduction"></a>1. Introduction</h2><p>Recently I have realized that many developers, whether they are new to the job market or have been working for years, deal with databases (especially MySQL) every day yet know little or nothing about how SQL statements are actually executed.</p><p>SQL execution in MySQL really is a complex process involving multiple components working together, so it is easy to get confused or develop misunderstandings during interviews or at work.</p><h3 id="SQL-Execution-Process"><a href="#SQL-Execution-Process" class="headerlink" title="SQL Execution Process"></a>SQL Execution Process</h3><p>So, in this article, I will take MySQL’s common InnoDB storage engine as an example and walk you through the execution process of SQL statements in detail, starting from the connector and going all the way to transaction commit and data persistence.</p><p>Let’s start with a diagram:</p><p><img src="https://s2.loli.net/2023/11/07/tIepN2gxRSKO7aT.webp"></p><p>First, the client connects to MySQL Server and sends a statement (INSERT, DELETE, UPDATE, or SELECT). When the Server receives the statement, it builds a parse tree for optimization.</p><p>When the optimizer optimizes the statement, it <strong>evaluates the cost of the available indexes and chooses the appropriate one</strong>, and the executor then calls the InnoDB engine interface to execute the statement.</p><h2 id="2-Specific-Execution-Flow"><a href="#2-Specific-Execution-Flow" class="headerlink" title="2. Specific Execution Flow"></a>2. 
Specific Execution Flow</h2><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/6a1d19d514ee4360aacd39293798fe0f~tplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><h3 id="1-Connection-Manager"><a href="#1-Connection-Manager" class="headerlink" title="1. Connection Manager"></a>1. Connection Manager</h3><p>The MySQL execution process begins with the connector. When a client requests a connection to MySQL, the connector is responsible for handling those connection requests.</p><p>It verifies the client’s identity and privileges, and then allocates a thread to handle the connection. <strong>MySQL creates a session</strong> for each connection thread, in which the client can send SQL statements to perform operations such as additions, deletions, and modifications.</p><h3 id="2-Parser"><a href="#2-Parser" class="headerlink" title="2. Parser"></a>2. Parser</h3><p>Once the connection is established, the client can send SQL statements to be executed.</p><p>These SQL statements are first sent to the parser, whose job is to <strong>parse the SQL statement, determine if it is syntactically correct</strong>, and convert it into an internal data structure for subsequent use by MySQL.</p><p>If the SQL statement has syntax errors, the parser will return an error message to the client.</p><h3 id="3-Optimizer"><a href="#3-Optimizer" class="headerlink" title="3. Optimizer"></a>3. Optimizer</h3><p>Once the SQL statement has been successfully parsed, the next step is to enter the realm of the optimizer.</p><p>The task of the optimizer is to evaluate different execution plans for the SQL statement and select the best one. It considers which indexes are available, which join methods are most efficient, and how to minimize the cost of the query.</p><h3 id="4-Executor"><a href="#4-Executor" class="headerlink" title="4. Executor"></a>4. 
Executor</h3><p>After the executor receives the execution plan generated by the optimizer, it starts executing the actual query operation.</p><p>The executor follows the steps in the execution plan, calling the InnoDB engine-level logic and fetching data from the data table, then performing <strong>sorting, aggregation, filtering</strong> and other operations.</p><p>Eventually, the executor returns the results to the client.</p><h3 id="5-write-undo-log"><a href="#5-write-undo-log" class="headerlink" title="5. write undo log"></a>5. write undo log</h3><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/f4febee31fe24979b99fca77af386543%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>When an executor performs an operation that modifies data, MySQL’s InnoDB engine first opens a transaction to generate an undo log (also called a rollback log) for those modifications.</p><p>The rollback log is used to <strong>record data prior to modifications so that the original data can be recovered</strong> when the transaction is rolled back. If the transaction fails, MySQL can use the undo log to undo the changes that have been made.</p><h3 id="6-Record-Cache-Lookup-Indexes"><a href="#6-Record-Cache-Lookup-Indexes" class="headerlink" title="6. Record Cache, Lookup Indexes"></a>6. Record Cache, Lookup Indexes</h3><p>MySQL uses a record cache to store rows of data read from a data table. 
This <strong>cache speeds up access to frequently read data and avoids the overhead of reading from disk every time</strong>.</p><p>When the data page is already in memory, the update only needs to happen in memory; otherwise the page may need to be read from disk before being updated.</p><p>What happens next depends on the type of index MySQL is updating, which falls into two categories:</p><ul><li>Unique indexes: index column values must be unique; non-primary-key unique indexes allow null values, while primary key indexes do not;</li><li>Ordinary indexes: no special restrictions; duplicate values and null values are allowed;</li></ul><p>When a SQL update reaches this step, InnoDB first checks whether the data page is in memory:</p><ul><li>In memory: check whether the index being updated is unique. If it is, verify that the update does not break uniqueness, then update the page in memory; if it is a non-unique index, <strong>update the data page in memory directly</strong>.</li><li>Not in memory: check whether the index being updated is unique. If it is, the page must be loaded from disk into memory immediately so uniqueness can be verified before updating; if it is a non-unique index, the update is <strong>recorded in the change buffer and merged to disk asynchronously when the system is idle</strong>.</li></ul><h4 id="change-buffer"><a href="#change-buffer" class="headerlink" title="change buffer"></a>change buffer</h4><p>The <strong>change buffer is one of the features of the InnoDB engine</strong>. 
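</p><p>The in-memory/on-disk decision above can be sketched as a small Go function (a deliberate simplification for illustration, not InnoDB’s actual code; the function and its return strings are made up):</p>

```go
package main

import "fmt"

// updateAction mirrors InnoDB's choice when a secondary-index update arrives:
// pages already cached are changed in memory; for pages still on disk, a
// unique index forces an immediate read (to check for duplicates), while a
// non-unique index lets the change wait in the change buffer.
func updateAction(pageInMemory, uniqueIndex bool) string {
	switch {
	case pageInMemory:
		return "update page in buffer pool"
	case uniqueIndex:
		return "load page from disk, check uniqueness, then update"
	default:
		return "record change in change buffer, merge to disk later"
	}
}

func main() {
	fmt.Println(updateAction(false, false)) // the change-buffer path
}
```

<p>Only the last branch involves the change buffer, which is discussed next. 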
Before MySQL 5.5, the change buffer existed mainly to improve the performance of data insertion, and was known as the insert buffer.</p><p>As we know, rows are stored in primary-key order, so when a secondary (non-clustered) index entry is inserted, <strong>the index’s leaf pages are touched in a scattered pattern, and each index-page update may require a disk access</strong>. Since every disk read and write takes a long time, insert performance suffers.</p><p>With the insert buffer enabled, InnoDB first checks whether the target secondary index page is in the buffer pool. If it is, the entry is inserted directly; if not, the change is placed into the insert buffer and sorted, then merged into the index page at a certain frequency.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/353d3b4a785d477e92372ce23e85df65%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>As shown in the figure, the insert buffer combines multiple operations to reduce random I&#x2F;O and disk interaction, improving overall performance.</p><p>Since MySQL 5.5, buffering has been gradually extended to deletions and modifications as well, and the mechanism is now uniformly called the change buffer.</p><p><strong>In a nutshell, the change buffer caches secondary-index changes (insert, delete, update) to reduce random I&#x2F;O and merge operations together.</strong></p><p>Unique indexes cannot use the change buffer, because they require immediate disk I&#x2F;O to make sure the data does not conflict.</p><h3 id="8-Write-redo-log"><a href="#8-Write-redo-log" class="headerlink" title="8. Write redo log"></a>8. Write redo log</h3><p>During SQL execution, InnoDB also records all data modifications to the redo log.</p><p>The redo log is a cyclically written log file that records each step of a transaction to ensure data persistence. 
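</p><p>As a toy model of why write-ahead logging makes changes durable (nothing like InnoDB’s real record format; the types here are invented for illustration), appending every change to a log before touching the data means the latest state can always be rebuilt by replaying the log:</p>

```go
package main

import "fmt"

// A toy write-ahead log: every change is appended to the log before the
// "data file" is touched, so after a crash the state can be rebuilt
// by replaying the log from the start.
type logRecord struct {
	key string
	val int
}

func replay(log []logRecord) map[string]int {
	data := make(map[string]int)
	for _, rec := range log {
		data[rec.key] = rec.val // redo each recorded change in order
	}
	return data
}

func main() {
	redoLog := []logRecord{{"balance", 100}, {"balance", 80}}
	fmt.Println(replay(redoLog)["balance"]) // prints 80
}
```

<p>Real redo records are page-level and the file is written cyclically, but the replay idea is the same. 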
If the system crashes, InnoDB can replay the redo log to recover changes that were committed but not yet flushed to disk, keeping the data consistent.</p><p>Note that the <strong>redo log has two states: prepare and commit</strong>. When InnoDB writes a data-page change to the redo log during transaction execution, the record is in the prepare state.</p><h3 id="9-Writing-the-Binlog-and-Committing-the-Transaction"><a href="#9-Writing-the-Binlog-and-Committing-the-Transaction" class="headerlink" title="9. Writing the Binlog and Committing the Transaction"></a>9. Writing the Binlog and Committing the Transaction</h3><p>In addition to the redo log, MySQL also records a binlog.</p><p>The binary log records all executed SQL statements rather than just the resulting data changes, which matters for replication and recovery: it restores not only the state of the data but also the SQL operations that produced it.</p><p>When the InnoDB engine layer has written the redo log, it notifies the MySQL Server layer that the update has been executed. MySQL Server then writes the executed SQL to the binlog, after which it tells InnoDB to move the redo log record to the commit state, and the transaction is successfully committed.</p><p>Note that whether a <strong>transaction commit succeeds is determined by whether it was written to the binlog</strong>. Once it is written, even if MySQL Server crashes, the data can later be recovered from the redo log and binlog.</p><h2 id="3-redo-log-and-binlog"><a href="#3-redo-log-and-binlog" class="headerlink" title="3. redo log and binlog"></a>3. redo log and binlog</h2><p>As mentioned above, a transaction commit has two phases. Let us summarize them:</p><ol><li>When data is updated, the in-memory data page is updated and the update is written to the redo log, which enters the prepare state. 
MySQL Server is then notified that the update is complete and ready to be committed;</li><li>MySQL Server writes either the executed SQL or the changed rows to the binlog, depending on whether the binlog format is STATEMENT or ROW, and then calls InnoDB’s interface to set the redo log record to the commit state, completing the update.</li></ol><p>Careful readers may ask: why does the binlog need only one write, while the redo log is committed in two phases? And why do we need the binlog at all when we already have the redo log?</p><p>To answer this, we need to start from the essential difference between the two logs.</p><h3 id="redo-log"><a href="#redo-log" class="headerlink" title="redo log"></a>redo log</h3><p>The redo log records transaction changes at the InnoDB engine level and supports automatic crash recovery.</p><p><strong>If you wrote only the binlog and not the redo log, you could lose the most recently executed transactions when MySQL goes down.</strong></p><h3 id="binlog"><a href="#binlog" class="headerlink" title="binlog"></a>binlog</h3><p>The binlog records all changes made to the database at the MySQL Server level, and is used for data archiving, backup, master-slave replication, and so on.</p><p>If the redo log were committed directly without going through the prepare phase, a crash between writing the redo log and writing the binlog would leave the two inconsistent in a master-slave deployment: <strong>the master could recover the data from its redo log, but the slave nodes, which replicate from the binlog, would never receive that part of the data</strong>.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/f0802711a680438fa9f0d4e9e9924123%7Etplv-k3u1fbpfcp-jj-mark%3A3024%3A0%3A0%3A0%3Aq75.awebp"></p><p>As the figure above shows, MySQL master-slave replication relies on the binlog of the Master node, the relay-log of the Slave node, and three important threads.</p><h4 id="log-dump-thread"><a href="#log-dump-thread" class="headerlink" title="log dump thread"></a>log dump thread</h4><p>When a slave node connects to a master node, the master node creates a <strong>log dump thread for it to read and send binlog contents</strong>. 
While reading the binlog, the log dump thread locks the binlog on the master node, releasing the lock only when the read is complete.</p><p>The master node creates one log dump thread for each of its slave nodes.</p><h4 id="I-O-threads"><a href="#I-O-threads" class="headerlink" title="I&#x2F;O threads"></a>I&#x2F;O threads</h4><p>When a slave node binds to a master node, it creates an <strong>I&#x2F;O thread that connects to the master and requests the binlog</strong> from it.</p><p>The I&#x2F;O thread receives the binlog content sent by the master’s log dump thread and writes it to the slave’s <strong>relay-log</strong>.</p><h4 id="SQL-thread"><a href="#SQL-thread" class="headerlink" title="SQL thread"></a>SQL thread</h4><p>The <strong>SQL thread listens to and reads the contents of the relay-log, parses it into concrete operations, and replays them so that the slave stays consistent with the master.</strong> After each run, the thread sleeps and waits for the next wakeup.</p><p>At a certain interval, the slave checks whether the master’s binlog has changed; if it has, it wakes the I&#x2F;O thread and the steps above are repeated.</p><h3 id="Conclusion"><a href="#Conclusion" class="headerlink" title="Conclusion"></a>Conclusion</h3><p>Thanks for reading, and see you in the next installment!</p><p>Feel free to like, share, and bookmark this page.</p>]]></content>
    
    
    <summary type="html">The task of the optimizer is to evaluate different execution plans for this SQL statement and choose the optimal one. It will consider which indexes are available, which join method is the most efficient, and how to minimize the cost of the query.</summary>
    
    
    
    <category term="Backend" scheme="https://www.nablepart.com/categories/Backend/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="Backend Technology Sharing" scheme="https://www.nablepart.com/tags/Backend-Technology-Sharing/"/>
    
    <category term="MySQL" scheme="https://www.nablepart.com/tags/MySQL/"/>
    
    <category term="minimize" scheme="https://www.nablepart.com/tags/minimize/"/>
    
    <category term="evaluate" scheme="https://www.nablepart.com/tags/evaluate/"/>
    
    <category term="network" scheme="https://www.nablepart.com/tags/network/"/>
    
    <category term="optimizer" scheme="https://www.nablepart.com/tags/optimizer/"/>
    
  </entry>
  
  <entry>
    <title>What is asked in an interview at a large factory</title>
    <link href="https://www.nablepart.com/8c8e314b89ed/"/>
    <id>https://www.nablepart.com/8c8e314b89ed/</id>
    <published>2023-11-05T16:04:00.000Z</published>
    <updated>2025-08-25T09:00:39.798Z</updated>
    
    <content type="html"><![CDATA[<p>Table of Contents</p><ol><li>Background</li><li>Self Introduction &amp; Past Experience</li><li>Go Language Knowledge Points</li><li>Operating System &amp; Computer Composition Principles</li><li>Middleware Principles</li><li>Networking &amp; Distributed</li><li>Algorithms &amp; Data Structures</li><li>Summary</li></ol><h2 id="1-Background"><a href="#1-Background" class="headerlink" title="1. Background"></a>1. Background</h2><h3 id="1-1-Personal-Information"><a href="#1-1-Personal-Information" class="headerlink" title="1.1 Personal Information"></a>1.1 Personal Information</h3><p>Bachelor’s degree, 4 years of back-end development, 2.5 years of Go language practice. I like singing, dancing, Rap, basketball, etc 🐶.</p><p>My main programming languages are Go and Java, and I work with technology stacks such as MySQL, Oracle, Redis, RabbitMQ, Kafka, SpringBoot, design patterns, networking, microservices, distributed systems, and so on.</p><p>Interview purpose: to meet a few experienced interviewers, gauge their level, and get a feel for the market.</p><h3 id="1-2-Interview-position"><a href="#1-2-Interview-position" class="headerlink" title="1.2 Interview position"></a>1.2 Interview position</h3><p>This interview is for a backend development engineer (Golang) position, described as follows:</p><p>Position selling points:</p><ol><li>Internet industry, pay above the peer level, with up to 16 months’ salary depending on performance;</li><li>core position with huge room for growth;</li></ol><p>Position responsibilities:</p><ol><li>participate in the design, development, and architecture of the company’s Internet products;</li><li>align with the company’s strategy and complete company and department OKR tasks;</li><li>follow cutting-edge industry technology and keep investing in technical construction;</li></ol><p>Requirements:</p><ol><li>Bachelor degree or above, with 2+ years of Golang programming and development experience;</li><li>familiar with common Linux commands and able to write basic shell scripts;</li><li>proficient in Go, familiar with network programming interfaces and the TCP&#x2F;IP and UDP protocols, with a deep understanding of network communication programming models;</li><li>proficient in common design patterns and object-oriented programming methodology, with solid design skills;</li><li>proficient in MySQL, with good database design and rich optimization experience;</li><li>familiar with NoSQL technologies (Redis, Memcached, etc.);</li><li>good technical architecture skills and project experience, good at spotting technical problems and proposing solutions;</li><li>strong learning ability, good communication skills and team spirit, able to work under pressure.</li></ol><p>It is not hard to see that the position and benefits are described in full, which is a plus for both the company and its HR 🐶.</p><h2 id="2-Self-introduction-past-experience"><a href="#2-Self-introduction-past-experience" class="headerlink" title="2. Self-introduction &amp; past experience"></a>2. 
Self-introduction &amp; past experience</h2><p>Because I am currently employed, the interview was held in the evening, at 8:00 pm.</p><p>The interviewer might have been at home, so he didn’t turn on his camera, but he explained this to me (a plus point; I hope interviewers at hiring companies are this polite, since the opposite lowers the odds of a candidate engaging well and passing 🐶).</p><p><img src="https://s2.loli.net/2023/11/07/5Cx2DOfP9Asab6X.webp"></p><p><strong>Interviewer from the hiring company: Hi, I’m an interviewer from xx company. This interview mainly covers your tech stack plus some programming questions. Could you introduce yourself first?</strong></p><p>Self-introduction: Uh, okay. Then briefly cover your school, major, graduation year, projects, and strongest skills, in 3~5 minutes.</p><blockquote><p>PS: Interviewers should try to turn on the camera as much as possible; if you really can’t, you’d better explain why. Introducing yourself at the beginning of the interview is also a plus, even a brief one.<br>To keep the word count down, the interviewer’s questions and my answers are recorded below as one question, one answer, i.e. 
Q&amp;A.</p></blockquote><p><strong>Q: Details of Project 1, and what optimizations were done (I said I did system optimization when I introduced myself)</strong></p><p>A: Fancy MySQL CRUD and query optimization, Redis caching, Kafka delay bugs, and so on; have your standard “eight-legged essay” talking points ready: <a href="https://mp.weixin.qq.com/s?__biz=MzI5Nzk2MDgwNg==&mid=2247484042&idx=1&sn=1620b3df43419745708f6f4c60a9ad9a&chksm">MySQL summary</a></p><p><strong>Q: What were your specific responsibilities in Project 2, and why did you change jobs?</strong></p><p>A: The background of Project 2 is…, what value I provide to users, which modules I am responsible for, my specific duties, my impact or contributions, and the final result. You can <strong>use the SMART principle to make the responsibilities clear, the goals explicit, and the results desirable</strong>.</p><p>The reason for changing jobs: the current business has stabilized and I want more room to grow, and your company’s technology and culture are more attractive to me…</p><p>Compliment each other a bit, and, by the way, prepare your reasons for leaving in advance: personal development? Commute or location issues? Or a technical pursuit, wanting to see more business and technical scenarios? In any case, do not give sensitive reasons such as “too much work, too little pay, incompetent leadership” and so on.</p><blockquote><p>Even if you do not intend to change jobs, think through your reasons for leaving before the interview, so that the interviewer can see you came “prepared”; otherwise you will just have to live with the result.</p></blockquote><h2 id="3-Go-Language-Knowledge-Points"><a href="#3-Go-Language-Knowledge-Points" class="headerlink" title="3. Go Language Knowledge Points"></a>3. Go Language Knowledge Points</h2><p><strong>Q: Tell me the difference between make and new in the Go language?</strong></p><p>A: In Go, make and new are two very common built-ins, and their differences are as follows:</p><ul><li>make can only allocate and initialize data of the types slice, map, and chan; new can allocate data of any type;</li><li>new returns a pointer to the type, i.e., *Type; make returns an initialized value of the type itself, i.e., Type;</li><li>new allocates zeroed memory, while make allocates memory and initializes it (for example, a slice’s underlying array pointer, len, and cap).</li></ul><p><strong>Q: Introduce the scheduling model of goroutines</strong></p><p>A: This article gives a good answer: <a href="https://mp.weixin.qq.com/s?__biz=MzI5Nzk2MDgwNg==&mid=2247484182&idx=1&sn=6d3f54eea5622a2d7f6323cbb553fdd8&chksm">GPM scheduling</a></p><p><strong>Q: What happens when you read or write to a closed chan</strong>?</p><p>A: Writing to it panics, while reading returns any remaining buffered values and then zero values. This question examines the basic 
use of chan; you need to know the three situations in which an operation on a chan panics:</p><ul><li>Closing a nil chan</li><li>Closing an already closed chan again</li><li>Sending data to an already closed chan</li></ul><p><strong>Q: The underlying structure of a slice</strong></p><p>A: A slice is a variable-length sequence backed by an array, and its data structure is defined as follows:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">type</span> slice <span class="keyword">struct</span> &#123;</span><br><span class="line">    array unsafe.Pointer</span><br><span class="line">    <span class="built_in">len</span>   <span class="type">int</span></span><br><span class="line">    <span class="built_in">cap</span>   <span class="type">int</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>where array is a pointer to the underlying array, len is the length of the current slice, cap is its capacity, and cap&gt;&#x3D;len.</p><h2 id="4-Operating-Systems-Computer-Composition-Principles"><a href="#4-Operating-Systems-Computer-Composition-Principles" class="headerlink" title="4. Operating Systems &amp; Computer Composition Principles"></a>4. Operating Systems &amp; Computer Composition Principles</h2><p><strong>Q: Why is the overhead of process switching greater than that of thread switching</strong>?</p><p>A: A process is the basic unit of resource allocation. 
Process switching involves flushing TLB entries, replacing the page-table global directory, and switching the virtual address space.</p><p>Threads, on the other hand, are essentially execution flows inside a process that share its resources, so all threads of a process share the same virtual address space, which <strong>saves the virtual address space switch</strong>. And when saving the memory context, a thread only needs to save the data in <strong>registers and the program counter</strong>.</p><blockquote><p>If you still don’t understand the difference between processes and threads, continue with this article: <a href="https://mp.weixin.qq.com/s?__biz=MzI5Nzk2MDgwNg==&mid=2247484182&idx=1&sn=6d3f54eea5622a2d7f6323cbb553fdd8&chksm">GPM scheduling</a></p></blockquote><p><strong>Q: What’s the difference between user space and kernel space</strong></p><p>A: Kernel space is the memory area that the <strong>operating system kernel</strong> accesses. 
When accessing external devices such as disks or network cards, data must first be loaded into kernel space, so kernel space is a protected memory region.</p><p><strong>User space is the memory area accessible to ordinary applications</strong>, which cannot load data from external devices directly.</p><p>In the early days of operating systems, there was no distinction between kernel space and user space, so applications (such as a hand-written script) could access arbitrary memory, which posed security risks: for example, a simple for loop sending data could exhaust the server’s disk, network, and other resources!</p><p><strong>Q: The difference between the send and sendfile data transfer functions under Linux</strong>.</p><p>A: This is a Linux network IO question. send uses the traditional, copy-based standard IO approach; to transfer data the client calls roughly the following:</p><figure class="highlight arduino"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">buffer = <span class="built_in">File</span>.read;</span><br><span class="line">Socket.<span class="built_in">send</span>(buffer)</span><br></pre></td></tr></table></figure><p>When the send function is called, the computer takes four steps to fetch the data and transmit it to the network:</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/976db374271649658327403113ca8400%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><ol><li>Copy disk data to the kernel-space buffer (page cache) via a DMA Copy operation;</li><li>Copy data from kernel space to the application cache in user space via a CPU Copy operation;</li><li>Copy user-space memory data to the kernel-space Socket send buffer via a CPU Copy operation;</li><li>Copy the data from the Socket buffer to the NIC through a DMA Copy 
operation, and then the NIC carries out the network transmission.</li></ol><p>From the above, we can see that send() needs 4 context switches and 4 data copies (2 DMA copies and 2 CPU copies), which wastes resources and is inefficient.</p><p>sendfile() is an implementation of the <strong>zero-copy</strong> technique: when the application (user space) does not need to touch the data during transmission, it avoids copying the data into user space (steps 2 and 3 in the figure above) and performs the copy directly in kernel space.</p><blockquote><p>Zero-copy: when a computer transfers data, it eliminates unnecessary copy operations to reduce the number of data copies and shared-bus operations, thus improving transfer efficiency.</p></blockquote><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/f8af04c7072c41f695e84a0fb7744de5%7Etplv-k3u1fbpfcp-zoom-in-crop-mark%3A1512%3A0%3A0%3A0.awebp"></p><p>Linux introduced sendfile() in version 2.1 to improve data transfer efficiency. As the figure above shows, the zero-copy approach shortens the overall flow from 4 steps to 3 and reduces context switching between kernel space and user space.</p><p>However, the data still has to be copied within kernel space. Can this step be omitted as well?</p><p>The answer is yes! In Linux 2.4, the DMA Gather technique allows sendfile() to omit the remaining CPU Copy as well (step 2 in the figure above), realizing zero-copy in the true sense of the word.</p><h2 id="5-Database-Knowledge-Examination"><a href="#5-Database-Knowledge-Examination" class="headerlink" title="5. Database Knowledge Examination"></a>5. 
Database Knowledge Examination</h2><p><strong>Q: What is a phantom read, when does it appear, and how do you solve it?</strong></p><p>A: A phantom read happens when the same range query inside one transaction returns rows that another transaction inserted in the meantime. Phantom reads are the main anomaly left at the Repeatable Read (RR) isolation level, and under standard SQL they can occur at every isolation level except <strong>Serializable</strong>. They can be eliminated by raising MySQL’s isolation level to Serializable, but transactions at that level execute serially, which is expensive and slow, so it is rarely used.</p><p>Another solution is to use a gap lock or a next-key lock (gap lock + record lock) under <strong>current reads</strong>, or multi-version concurrency control (MVCC) under <strong>snapshot reads</strong>.</p><p>Under MySQL’s InnoDB, non-locking query statements like <code>select xx from table</code> are snapshot reads; everything else is a current read, such as:</p><p><code>select + for update</code></p><p><code>select + lock in share mode</code></p><p><code>update/insert/delete... 
</code></p><blockquote><p>For those unfamiliar with isolation levels and MVCC, check out this article: <a href="https://mp.weixin.qq.com/s?__biz=MzI5Nzk2MDgwNg==&mid=2247484042&idx=1&sn=1620b3df43419745708f6f4c60a9ad9a&chksm">MySQL summary</a>; knowledge points about MySQL locks will be covered in subsequent articles.</p></blockquote><p><strong>Q: Do you know distributed locks? How would you implement one with Redis?</strong></p><ul><li><p>A: In a multi-threaded environment, to ensure that a piece of code is accessed by only one thread at a time, a standalone service can simply use a local lock. But in a distributed scenario, say with 3 servers deployed, how do you guarantee that a timed task is executed by a thread on only one of them?</p><p>The answer is distributed locks. 
Distributed locks add a lock when a thread enters the code region and release it when the access completes, and they generally have the following characteristics:</p><ul><li>Mutual exclusivity: at any moment, only one thread can hold the lock;</li><li>Re-entrancy: a thread on a node that has already acquired the lock can acquire it again while it is unreleased;</li><li>Deadlock avoidance: even if the server that took the lock goes down, the lock is guaranteed to be released;</li><li>High performance and high availability: locking and unlocking are efficient, and the lock service stays available so the distributed lock does not fail.</li></ul><p>Redis implements distributed locking by setting a value with an expiration time using the SET command with the NX option (which subsumes the older SETNX command):</p><figure class="highlight css"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">set key value <span class="selector-attr">[EX seconds]</span><span class="selector-attr">[PX milliseconds]</span><span class="selector-attr">[NX]</span><span class="selector-attr">[XX]</span></span><br></pre></td></tr></table></figure><ul><li>EX seconds: set the expiration time in seconds.</li><li>PX milliseconds: set the expiration time in milliseconds.</li><li>NX: set the value only if the key does not exist.</li><li>XX: set the value only if the key exists.</li></ul><p>A Redis implementation maintains each characteristic of a distributed lock as follows:</p><ul><li>Mutual exclusivity: the value must be unique;</li><li>Re-entrancy: depends on identifying the thread or node; for example, when multiple servers execute the same code, you can use server information (e.g., the IP) as the lock’s unique value and let a server decide re-entrancy based on that value;</li><li>Deadlock avoidance: give the lock an expiration time, so that even if the server running the code crashes, that entry in Redis will be deleted 
automatically when it expires;</li><li>High performance and high availability: Redis uses a cluster model with multi-node deployment to ensure availability, and Redis-based distributed locking generally performs better than MySQL- or ZooKeeper-based locking.</li></ul></li></ul><h2 id="6-Networking-Distributed"><a href="#6-Networking-Distributed" class="headerlink" title="6. Networking &amp; Distributed"></a>6. Networking &amp; Distributed</h2><p><strong>Q: Introduce the sliding window</strong> of TCP</p><p>A: When a client and server transfer data over TCP, the concept of a <strong>window</strong> is introduced to improve transmission efficiency; that is, <strong>acknowledgements are not sent for every maximum-size segment, but for a whole window of segments</strong>.</p><p>To guarantee that data is not lost, TCP originally requires each segment to be confirmed by the receiver before the sender may send the next one, which is equivalent to <strong>blocking, unbuffered waiting</strong>. Because the two ends send and consume at different rates, this makes communication slow.</p><p>The sliding window mechanism solves this problem: the window size is the maximum amount of data the sender may still send. The sliding window implementation uses buffering, so multiple segments can be confirmed at once; the sender can transmit several segments in a row, and the receiving side can process them before it gets too busy.</p><p>Moreover, with the sliding window mechanism, when multiple segments have been sent, an acknowledgement for one segment means that all earlier segments have been successfully received. 
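</p><p>The cumulative-acknowledgement idea can be sketched in a few lines of Go. This is a toy model, not real TCP; the names ackWindow and OnAck are illustrative.</p>

```go
package main

import "fmt"

// ackWindow models the sender's view of a sliding window: segments in
// flight, waiting to be confirmed by a cumulative acknowledgement.
type ackWindow struct {
	inFlight map[int]bool // segment number -> still unconfirmed
}

func newAckWindow(segments ...int) *ackWindow {
	w := &ackWindow{inFlight: map[int]bool{}}
	for _, s := range segments {
		w.inFlight[s] = true
	}
	return w
}

// OnAck applies a cumulative ACK: every outstanding segment numbered below
// ack is confirmed at once, so no per-segment ACK is needed. It returns the
// number of segments confirmed by this ACK.
func (w *ackWindow) OnAck(ack int) int {
	confirmed := 0
	for s := range w.inFlight {
		if s < ack {
			delete(w.inFlight, s) // deleting during range is safe in Go
			confirmed++
		}
	}
	return confirmed
}

func main() {
	w := newAckWindow(100, 101, 102) // three segments in flight
	fmt.Println(w.OnAck(102))        // 2: segments 100 and 101 confirmed
	fmt.Println(len(w.inFlight))     // 1: only segment 102 still unconfirmed
}
```

<p>A single ACK of 102 confirms segments 100 and 101 together, so a lost ACK for an earlier segment does not by itself force a retransmission.</p><p>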
For example: when the client sends three segments numbered 100, 101, and 102, <strong>if the sender receives an ACK of 102, it means all data before 102 (segments 100 and 101) has been received</strong>, which avoids re-sending segments whenever a single ACK fails to arrive.</p><p><strong>Q: Introduce HTTPS</strong></p><p>A: You can read this article, which describes it very clearly: </p><p><strong>Q: There are a lot of unidentified certificates in the browser; what are the risks if we click trust?</strong></p><p>A: There is a risk of a <strong>man-in-the-middle attack</strong>. For example, a program with ulterior motives may quietly install a root certificate; after we click trust, it can intercept the TLS handshake, send the client a temporary certificate signed by itself, and then impersonate the browser client to connect to the real server and carry out the subsequent key negotiation.</p><p><strong>Q: Symmetric encryption and asymmetric encryption</strong></p><p>A: In symmetric encryption, both parties use the same key to encrypt and decrypt messages. Since the algorithm is public, the key must not be disclosed. It is computationally cheap and fast, with the disadvantages of weaker security and difficult key management; examples are AES and IDEA.</p><p>Asymmetric encryption works only with a matched pair of public and private keys, usually public-key encryption and private-key decryption. Process: Party A generates a key pair and publishes one of them as the public key. Party B obtains the public key, encrypts the data, and sends it to Party A, who decrypts it with the corresponding private key. 
Asymmetric encryption is relatively safe, but slower; an example is the RSA algorithm (RSA also supports private-key encryption with public-key decryption).</p><p><strong>Q: Do you understand the Raft protocol? Introduce it</strong></p><p>A: The Raft protocol divides distributed nodes into three roles:</p><ul><li>Leader</li><li>Follower</li><li>Candidate</li></ul><p>There are three important scenarios in how the Raft protocol works.</p><p><strong>1) Follower becomes a Candidate</strong></p><p>Each Follower expects periodic heartbeats from the Leader, typically every 150~300 ms; if no heartbeat packet is received for a while, the Follower becomes a Candidate.</p><p><strong>2) Candidate runs for Leader</strong></p><p>After a Follower becomes a Candidate, it sends vote requests to all other surviving nodes, and the other nodes reply; if more than half of the nodes grant the request, the Candidate becomes the Leader. If there is a tie, each node waits a random time before campaigning again, and all nodes vote anew.</p><p><strong>3) New Leader starts working</strong></p><p>The new Leader periodically sends heartbeat packets to the Followers, and each Follower resets its timer upon receiving one. From then on, when the Leader receives a client request, it writes the data change into its log and replicates it to all Followers; once a majority of Followers have applied the change, it commits the operation. The Leader then notifies all Followers to commit as well, at which point all nodes agree on the data.</p><ul><li><h2 id="7-Algorithms-and-Data-Structures"><a href="#7-Algorithms-and-Data-Structures" class="headerlink" title="7. Algorithms and Data Structures"></a>7. 
Algorithms and Data Structures</h2><p><strong>Q: What is the time complexity of quicksort?</strong></p><p>A: O(nlogn) on average, O(n^2) in the worst case.</p><p><strong>Q: What’s the worst case of time complexity for quicksort?</strong></p><p>A: The worst case occurs when the pivot chosen each time is the smallest or largest element of the current sequence, which makes the recursion depth equal to the length of the array.</p><p><strong>Q: Given an m x n two-dimensional matrix, write an efficient algorithm to determine whether a target value exists in the matrix.</strong> The elements in the matrix have the following characteristics:</p><ul><li>The elements increase along each row, e.g. [1,3,8].</li><li>The elements increase along each column, e.g. [2,5,9].</li></ul></li></ul><p><img src="https://s2.loli.net/2023/11/07/EGAVsTLWmIboze8.webp"></p><p>A: This question is not difficult; it is essentially a “matrix version” of binary search, which is also the original LeetCode question #74, implemented in Go:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">searchMatrix</span><span class="params">(matrix [][]<span class="type">int</span>, target <span class="type">int</span>)</span></span> <span class="type">bool</span> &#123;</span><br><span class="line">   <span class="keyword">if</span> <span class="built_in">len</span>(matrix) == <span class="number">0</span> || <span class="built_in">len</span>(matrix[<span class="number">0</span>]) == <span class="number">0</span> &#123;</span><br><span class="line">       <span class="keyword">return</span> <span class="literal">false</span></span><br><span class="line">  &#125;</span><br><span class="line">   m, n := <span class="built_in">len</span>(matrix), <span class="built_in">len</span>(matrix[<span class="number">0</span>])</span><br><span class="line"></span><br><span class="line">   i := sort.Search(m*n, <span class="function"><span class="keyword">func</span><span class="params">(i <span class="type">int</span>)</span></span> <span class="type">bool</span> &#123;</span><br><span class="line">       <span class="keyword">return</span> matrix[i/n][i%n] &gt;= target</span><br><span class="line">  &#125;)</span><br><span class="line">   <span class="keyword">return</span> i &lt; m*n &amp;&amp; matrix[i/n][i%n] == target</span><br></pre></td></tr></table></figure><p>Q: Given an array, shuffle its elements so that each element ends up in a completely random position (every position with exactly equal probability). <strong>Requires space complexity of O(1) and time complexity of O(n)</strong>.</p><p>For example, [1, 2, 3, 4, 5] &#x3D;&#x3D;&gt; [2, 4, 1, 5, 3], where the probability of an element appearing in each position is the same.</p><p>A: This is the Fisher-Yates shuffle; Go code implementation:</p><figure class="highlight css"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">func randSort(arr <span class="selector-attr">[]</span>int) &#123;</span><br><span class="line">   n := <span class="built_in">len</span>(arr)</span><br><span class="line">   for i:=<span class="number">0</span>; i &lt; n; i++ &#123;</span><br><span class="line">       rd := i+rand.<span class="built_in">Intn</span>(n-i)</span><br><span class="line">       arr[i], arr[rd] = arr[rd], arr[i]</span><br><span class="line">  &#125;</span><br><span
class="line">&#125;</span><br></pre></td></tr></table></figure><p>**Q: Given a unidirectional chained table, determine whether there are rings in the chained table **</p><p>A: LeetCode Original Question - Ring Chained Table, Go Language Code Implementation:</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">type</span> Node <span class="keyword">struct</span> &#123;</span><br><span class="line">    Value <span class="type">int</span></span><br><span class="line">    Next  *Node</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">hasCycle</span><span class="params">(head *Node)</span></span> <span class="type">bool</span> &#123;</span><br><span class="line">    slow, fast := head, head</span><br><span class="line">    <span class="keyword">for</span> fast != <span class="literal">nil</span> &amp;&amp; fast.Next != <span class="literal">nil</span> &#123;</span><br><span class="line">        slow = slow.Next</span><br><span class="line">        fast = fast.Next.Next</span><br><span class="line">        <span class="keyword">if</span> fast == slow &#123;</span><br><span class="line">            <span class="keyword">return</span> <span class="literal">true</span></span><br><span class="line">        &#125;</span><br><span class="line">    &#125;</span><br><span 
class="line">    <span class="keyword">return</span> <span class="literal">false</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><h2 id="8-Summary"><a href="#8-Summary" class="headerlink" title="8. Summary"></a>8. Summary</h2><p>As you can see, the questions in this first-round interview were fairly basic: the ones on networking, composition principles, and distributed systems did not go deep, and the algorithm questions were basically LeetCode easy or medium. But this is also a feature of <strong>big-company technical interviews: they may not go deep, but they are comprehensive</strong>.</p><p>However, once a candidate gets stuck on a knowledge point, the follow-up questions can get harder, because some big-company interviewers will seize on your weak spots and keep digging until you can’t answer. And, as we all understand about today’s Internet job market, even if you answer well you may still be rejected for headcount (KPI) reasons.</p><p>Therefore, seize the opportunity and prepare well before the interview; in the technical interview, <strong>try to answer with highlights, answer as comprehensively as possible, and extend each answer to related topics</strong>. For example: when asked about processes and threads, you can bring up Go’s goroutines and explain the underlying differences among the three. When asked about memory management, you can talk about Go’s own implementation (its TCMalloc-style allocator, etc.), and then discuss memory leaks, optimization, and other scenarios drawn from real work.</p>]]></content>
    
    
    <summary type="html">Interviewing is a tense and interesting process; tense because of the importance, interesting because of the unpredictable results. Sometimes, with a glance or a word, the interviewer will decide whether you will stay or go. So, is it like that in the interviews of big Internet companies?</summary>
    
    
    
    <category term="Technology" scheme="https://www.nablepart.com/categories/Technology/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="Backend" scheme="https://www.nablepart.com/tags/Backend/"/>
    
    <category term="network" scheme="https://www.nablepart.com/tags/network/"/>
    
    <category term="Interview" scheme="https://www.nablepart.com/tags/Interview/"/>
    
    <category term="Go" scheme="https://www.nablepart.com/tags/Go/"/>
    
    <category term="companies" scheme="https://www.nablepart.com/tags/companies/"/>
    
    <category term="importance" scheme="https://www.nablepart.com/tags/importance/"/>
    
  </entry>
  
  <entry>
    <title>Designing a Gateway from 0 to 1: Integrating Nacos - Configuration Pull and Subscribing to Configuration Change Notifications</title>
    <link href="https://www.nablepart.com/efd5fa6af283/"/>
    <id>https://www.nablepart.com/efd5fa6af283/</id>
    <published>2023-11-05T13:12:00.000Z</published>
    <updated>2025-08-25T09:00:39.798Z</updated>
    
    <content type="html"><![CDATA[<p>In the previous article, we roughly completed the service registration function of the registry; in this article we will implement the config center’s configuration pull and configuration-change listening. As before, we first need to define a config center interface that initializes the config center and subscribes to configuration change events.</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">public</span> <span class="keyword">interface</span> <span class="title class_">ConfigCenter</span> &#123;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">void</span> <span class="title function_">init</span><span class="params">(String serverAddr, String env)</span>;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">void</span> <span class="title function_">subscribeRulesChange</span><span class="params">(RulesChangeListener listener)</span>;</span><br><span class="line">&#125;</span><br><span class="line"></span><br></pre></td></tr></table></figure><figure class="highlight c"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">public interface RulesChangeListener &#123;</span><br><span class="line"></span><br><span class="line">    <span class="type">void</span> <span class="title function_">onRulesChange</span><span class="params">(List&lt;Rule&gt; rules)</span>;</span><br><span class="line">&#125;</span><br><span
class="line"></span><br></pre></td></tr></table></figure><p>With the interface definitions in place, we can start thinking about how to implement the actual configuration pull. First, we need to introduce the Nacos client dependency:</p><figure class="highlight xml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">&lt;dependency&gt;</span><br><span class="line">    &lt;groupId&gt;com.alibaba.nacos&lt;/groupId&gt;</span><br><span class="line">    &lt;artifactId&gt;nacos-client&lt;/artifactId&gt;</span><br><span class="line">    &lt;version&gt;2.0.4&lt;/version&gt;</span><br><span class="line">&lt;/dependency&gt;</span><br></pre></td></tr></table></figure><p>Nacos then provides ConfigService, a class that helps us quickly pull configurations from Nacos. It is used as follows:</p><figure class="highlight c"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br></pre></td><td class="code"><pre><span class="line">private <span class="type">static</span> final String DATA_ID = <span class="string">&quot;api-gateway&quot;</span>;</span><br><span class="line"></span><br><span class="line">private String serverAddr;</span><br><span class="line"></span><br><span class="line">private String env;</span><br><span class="line"></span><br><span class="line">private 
ConfigService configService;</span><br><span class="line"></span><br><span class="line">@Override</span><br><span class="line">public <span class="type">void</span> <span class="title function_">init</span><span class="params">(String serverAddr, String env)</span> &#123;</span><br><span class="line">    this.serverAddr = serverAddr;</span><br><span class="line">    this.env = env;</span><br><span class="line"></span><br><span class="line">    try &#123;</span><br><span class="line">        this.configService = NacosFactory.createConfigService(serverAddr);</span><br><span class="line">    &#125; catch (NacosException e) &#123;</span><br><span class="line">        throw new RuntimeException(e);</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>The above completes the initialization of the configuration center; we can now use the methods it provides to pull our configuration:</p><figure class="highlight c"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line">String configJson = configService.getConfig(DATA_ID, env, <span class="number">5000</span>);</span><br><span class="line"></span><br><span class="line"><span class="built_in">log</span>.info(<span class="string">&quot;config from nacos: &#123;&#125;&quot;</span>, configJson);</span><br><span class="line">List rules = JSON.parseObject(configJson).getJSONArray(<span class="string">&quot;rules&quot;</span>).toJavaList(Rule.class);</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>Here you can parse your configuration however you like.</p><p>As usual, we also need to subscribe to the configuration change event. 
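<p>To make the subscription callback flow concrete first, here is a pure-JDK sketch of the listener pattern the gateway relies on. FakeConfigCenter is a hypothetical stand-in for the Nacos-backed implementation, and the rules are plain strings rather than Rule objects; the point is only the replay-on-subscribe and notify-on-change behavior.</p>

```java
import java.util.ArrayList;
import java.util.List;

// Mirrors the article's RulesChangeListener, with String rules for simplicity.
interface RulesChangeListener {
    void onRulesChange(List<String> rules);
}

// Hypothetical in-memory config center standing in for the Nacos-backed one:
// it replays the current rules to a new subscriber, then pushes every change.
class FakeConfigCenter {
    private final List<RulesChangeListener> listeners = new ArrayList<>();
    private List<String> rules = List.of("default-route");

    void subscribeRulesChange(RulesChangeListener listener) {
        listeners.add(listener);
        listener.onRulesChange(rules); // replay the current config immediately
    }

    void publish(List<String> newRules) {
        rules = newRules;
        for (RulesChangeListener l : listeners) {
            l.onRulesChange(newRules); // push the change to every subscriber
        }
    }
}

public class ListenerSketch {
    public static void main(String[] args) {
        FakeConfigCenter center = new FakeConfigCenter();
        List<String> seen = new ArrayList<>();
        center.subscribeRulesChange(seen::addAll);
        center.publish(List.of("user-route", "order-route"));
        System.out.println(seen); // [default-route, user-route, order-route]
    }
}
```

<p>The Nacos-backed implementation shown next does exactly this: an initial getConfig call for the replay, and addListener for subsequent change notifications.</p>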
The method is as follows:</p><figure class="highlight c"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br></pre></td><td class="code"><pre><span class="line">@Override</span><br><span class="line">   public <span class="type">void</span> <span class="title function_">subscribeRulesChange</span><span class="params">(RulesChangeListener listener)</span> &#123;</span><br><span class="line">       try &#123;</span><br><span class="line"></span><br><span class="line">           String configJson = configService.getConfig(DATA_ID, env, <span class="number">5000</span>);</span><br><span class="line"></span><br><span class="line">           <span class="built_in">log</span>.info(<span class="string">&quot;config from nacos: &#123;&#125;&quot;</span>, configJson);</span><br><span class="line">           List rules = JSON.parseObject(configJson).getJSONArray(<span class="string">&quot;rules&quot;</span>).toJavaList(Rule.class);</span><br><span class="line"></span><br><span class="line">           listener.onRulesChange(rules);</span><br><span 
class="line"></span><br><span class="line">           configService.addListener(DATA_ID, env, new Listener() &#123;</span><br><span class="line"></span><br><span class="line">               @Override</span><br><span class="line">               public Executor getExecutor() &#123;</span><br><span class="line">                   <span class="keyword">return</span> null;</span><br><span class="line">               &#125;</span><br><span class="line"></span><br><span class="line">               @Override</span><br><span class="line">               public <span class="type">void</span> receiveConfigInfo(String configInfo) &#123;</span><br><span class="line">                   <span class="built_in">log</span>.info(<span class="string">&quot;config from nacos: &#123;&#125;&quot;</span>, configInfo);</span><br><span class="line">                   List rules = JSON.parseObject(configInfo).getJSONArray(<span class="string">&quot;rules&quot;</span>).toJavaList(Rule.class);</span><br><span class="line">                   listener.onRulesChange(rules);</span><br><span class="line">               &#125;</span><br><span class="line">           &#125;);</span><br><span class="line"></span><br><span class="line">       &#125; catch (NacosException e) &#123;</span><br><span class="line">           throw new RuntimeException(e);</span><br><span class="line">       &#125;</span><br><span class="line">   &#125;</span><br></pre></td></tr></table></figure><p>The most important part is this line of code:</p><figure class="highlight c"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line">configService.addListener(DATA_ID, env, new Listener()</span><br></pre></td></tr></table></figure><p>It adds our listener to the Nacos listener list, so that whenever the Nacos configuration changes, we receive the event and can execute the processing logic we 
want.</p><p>At this point, the Nacos integration is essentially complete. This module requires a solid understanding of the Nacos source code and interfaces, so it is recommended that you finish learning Nacos before reading this series of articles.</p>]]></content>
    
    
    <summary type="html">I built my own gateway, and it helped me land an offer from a major tech company. This is my complete record of designing a gateway from 0 to 1, including the thinking process, flow charts, source code, and other materials.</summary>
    
    
    
    <category term="java" scheme="https://www.nablepart.com/categories/java/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="source" scheme="https://www.nablepart.com/tags/source/"/>
    
    <category term="design" scheme="https://www.nablepart.com/tags/design/"/>
    
    <category term="Self-developed" scheme="https://www.nablepart.com/tags/Self-developed/"/>
    
    <category term="gateway" scheme="https://www.nablepart.com/tags/gateway/"/>
    
    <category term="big factory" scheme="https://www.nablepart.com/tags/big-factory/"/>
    
    <category term="thinking" scheme="https://www.nablepart.com/tags/thinking/"/>
    
    <category term="process" scheme="https://www.nablepart.com/tags/process/"/>
    
  </entry>
  
  <entry>
    <title>Designing a Gateway from 0 to 1 Integrating Nacos - Service Registration and Service Subscription Implementation</title>
    <link href="https://www.nablepart.com/db480acecb11/"/>
    <id>https://www.nablepart.com/db480acecb11/</id>
    <published>2023-11-05T12:12:00.000Z</published>
    <updated>2025-08-25T09:00:39.798Z</updated>
    
    <content type="html"><![CDATA[<p>You can get the project source code and related materials from the video introduction: <a href="https://www.bilibili.com/video/BV1eC4y1n73c/?vd_source=1d4d63e205b3ad352b4771f87295d16d#reply747752344">link to effect demo</a></p><h1 id="Nacos"><a href="#Nacos" class="headerlink" title="Nacos"></a>Nacos</h1><p>Nacos provides a number of powerful features, notably service discovery and health checking. It supports both DNS-based and RPC-based service discovery, and performs real-time health checks on services to prevent requests from being sent to unhealthy hosts or service instances. Nacos also provides a visual console that makes it easy to manage instances and related information, as well as a dynamic configuration service that lets us manage application and service configuration across all environments in a centralized, externalized, and dynamic way.</p><p><img src="https://s2.loli.net/2023/11/05/ZDFN5KfolqXx1Ih.webp"></p><p>Nacos is the registration and configuration center I use most when developing my own projects; its community is more active and its code easier to read than the alternatives. Here is the <a href="https://nacos.io/zh-cn/">official Nacos website</a>. I won’t go into too much detail about the features of Nacos in this article. In this chapter, I will use the interfaces exposed by Nacos to implement the project’s service registration and service discovery functions.<br>Completing this chapter will also give you a deeper understanding of the underlying principles of Nacos and of registries in general. Below are some articles I wrote while learning Nacos; take a look if you are interested. 
</p><ul><li><a href="https://blog.csdn.net/Zhangsama1/article/details/131227567">Using Nacos to implement dynamic thread pools and Nacos configuration file update listener events</a></li><li><a href="https://blog.csdn.net/Zhangsama1/article/details/132143057">[Source Code Analysis] How does Nacos use the AP protocol to accomplish data synchronization between servers?</a></li><li><a href="https://blog.csdn.net/Zhangsama1/article/details/132141120">[Source Code Analysis] How does the Nacos server side update and save registry information?</a></li><li><a href="https://blog.csdn.net/Zhangsama1/article/details/132145216">How Nacos auto-registration works, and how service registration updates are saved to the registry</a></li></ul><p>Why I chose Nacos was briefly explained in a previous post; here are the reasons in more detail:</p><ul><li>Nacos lets me manage all services and metadata in the data center from the perspective of building a microservices platform. As my source-code analyses above show, Nacos divides services into fine-grained instances, and we can manage the information of each of those instances.</li><li>Nacos supports both DNS-based and RPC-based service discovery, giving us strong service discovery options.</li><li>Nacos performs real-time health checks on services, preventing requests from being sent to unhealthy hosts or service instances.</li><li>The dynamic configuration service lets you manage application and service configuration across all environments in a centralized, externalized, and dynamic way; I have leveraged it before for <a href="https://blog.csdn.net/Zhangsama1/article/details/131227567">dynamic configuration of thread pools</a>.</li></ul><h1 id="Define-the-service-registration-and-subscription-methods"><a href="#Define-the-service-registration-and-subscription-methods" class="headerlink" title="Define the service registration and subscription methods"></a>Define the service registration and subscription methods</h1><p>In this step, we need to define the interfaces the gateway uses to connect to Nacos, the registry; later we will implement the interfaces 
that will link our items to the registry. To register a service with the registry, we need to be able to initialize the client, register, deregister, and subscribe, so we write the following interface and implement its methods later in the concrete registry implementation.</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">public</span> <span class="keyword">interface</span> <span class="title class_">RegisterCenter</span> &#123;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">void</span> <span class="title function_">init</span><span class="params">(String registerAddress, String env)</span>;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">void</span> <span class="title function_">register</span><span class="params">(ServiceDefinition serviceDefinition, ServiceInstance serviceInstance)</span>;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">void</span> <span class="title function_">deregister</span><span class="params">(ServiceDefinition serviceDefinition, ServiceInstance serviceInstance)</span>;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">void</span> <span class="title function_">subscribeAllServices</span><span class="params">(RegisterCenterListener registerCenterListener)</span>;</span><br><span class="line">&#125;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>After implementing the interface, we need to provide a method 
that listens for configuration changes in the registry. This is a particularly important feature of Nacos as a registry and configuration center, and the interface definition is as follows:</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">public</span> <span class="keyword">interface</span> <span class="title class_">RegisterCenterListener</span> &#123;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">void</span> <span class="title function_">onChange</span><span class="params">(ServiceDefinition serviceDefinition,</span></span><br><span class="line"><span class="params">                  Set serviceInstanceSet)</span>;</span><br><span class="line">&#125;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h1 id="Service-Information-Loading-and-Configuration"><a href="#Service-Information-Loading-and-Configuration" class="headerlink" title="Service Information Loading and Configuration"></a>Service Information Loading and Configuration</h1><p>Based on the above service registration and subscription interfaces, we can roughly write out how to register our gateway to Nacos. 
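<p>One detail of the bootstrap code worth calling out in advance is the JVM shutdown hook used for deregistration, so the gateway instance is removed from the registry when the process exits instead of lingering as a stale entry. Here is a minimal runnable sketch of that pattern, with a hypothetical FakeRegisterCenter standing in for the real implementation:</p>

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical stand-in for the RegisterCenter implementation; only the
// deregister call matters for this sketch.
class FakeRegisterCenter {
    final AtomicBoolean registered = new AtomicBoolean(true);

    void deregister() {
        registered.set(false);
        System.out.println("deregistered from registry");
    }
}

public class ShutdownHookSketch {
    public static void main(String[] args) {
        FakeRegisterCenter center = new FakeRegisterCenter();
        // A shutdown hook runs on normal JVM exit (including SIGINT/SIGTERM),
        // which is when the instance should be removed from the registry.
        Runtime.getRuntime().addShutdownHook(new Thread(center::deregister));
        System.out.println("gateway running; shutdown hook installed");
    }
}
```

<p>The real bootstrap does the same thing, calling registerCenter.deregister(...) with the gateway’s ServiceDefinition and ServiceInstance inside the hook.</p>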
Of course, we don’t have a specific implementation of how to register with the Nacos registry, but we can write out a general way to call it.</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span 
class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line"><span class="keyword">public</span> <span class="keyword">class</span> <span class="title class_">Bootstrap</span></span><br><span class="line">&#123;</span><br><span class="line">    <span class="keyword">public</span> <span class="keyword">static</span> <span class="keyword">void</span> <span class="title function_">main</span><span class="params">( String[] args )</span></span><br><span class="line">    &#123;</span><br><span class="line"></span><br><span class="line">        <span class="type">Config</span> <span class="variable">config</span> <span class="operator">=</span> ConfigLoader.getInstance().load(args);</span><br><span class="line">        System.out.println(config.getPort());</span><br><span class="line"></span><br><span class="line">        <span class="type">Container</span> <span class="variable">container</span> <span class="operator">=</span> <span class="keyword">new</span> <span class="title class_">Container</span>(config);</span><br><span class="line">        container.start();</span><br><span class="line"></span><br><span class="line">        <span class="keyword">final</span> <span class="type">RegisterCenter</span> <span class="variable">registerCenter</span> <span class="operator">=</span> registerAndSubscribe(config);</span><br><span class="line"></span><br><span class="line">        Runtime.getRuntime().addShutdownHook(<span 
class="keyword">new</span> <span class="title class_">Thread</span>()&#123;</span><br><span class="line"></span><br><span class="line">            <span class="keyword">public</span> <span class="keyword">void</span> <span class="title function_">run</span><span class="params">()</span>&#123;</span><br><span class="line">                registerCenter.deregister(</span><br><span class="line">                        buildGatewayServiceDefinition(config),</span><br><span class="line">                        buildGatewayServiceInstance(config));</span><br><span class="line">            &#125;</span><br><span class="line">        &#125;);</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">private</span> <span class="keyword">static</span> RegisterCenter <span class="title function_">registerAndSubscribe</span><span class="params">(Config config)</span> &#123;</span><br><span class="line"></span><br><span class="line">        <span class="type">ServiceLoader</span> <span class="variable">serviceLoader</span> <span class="operator">=</span> ServiceLoader.load(RegisterCenter.class);</span><br><span class="line">        <span class="keyword">final</span> <span class="type">RegisterCenter</span> <span class="variable">registerCenter</span> <span class="operator">=</span> serviceLoader.findFirst().orElseThrow(() -&gt; &#123;</span><br><span class="line">            log.error(<span class="string">&quot;not found RegisterCenter impl&quot;</span>);</span><br><span class="line">            <span class="keyword">return</span> <span class="keyword">new</span> <span class="title class_">RuntimeException</span>(<span class="string">&quot;not found RegisterCenter impl&quot;</span>);</span><br><span class="line">        &#125;);</span><br><span class="line"></span><br><span class="line">        registerCenter.init(config.getRegistryAddress(), config.getEnv());</span><br><span class="line"></span><br><span 
class="line">        <span class="type">ServiceDefinition</span> <span class="variable">serviceDefinition</span> <span class="operator">=</span> buildGatewayServiceDefinition(config);</span><br><span class="line">        <span class="type">ServiceInstance</span> <span class="variable">serviceInstance</span> <span class="operator">=</span> buildGatewayServiceInstance(config);</span><br><span class="line"></span><br><span class="line">        registerCenter.register(serviceDefinition, serviceInstance);</span><br><span class="line"></span><br><span class="line">        registerCenter.subscribeAllServices(<span class="keyword">new</span> <span class="title class_">RegisterCenterListener</span>() &#123;</span><br><span class="line"></span><br><span class="line">            <span class="keyword">public</span> <span class="keyword">void</span> <span class="title function_">onChange</span><span class="params">(ServiceDefinition serviceDefinition, Set serviceInstanceSet)</span> &#123;</span><br><span class="line">                log.info(<span class="string">&quot;refresh service and instance: &#123;&#125; &#123;&#125;&quot;</span>, serviceDefinition.getId(),</span><br><span class="line">                        JSON.toJSON(serviceInstanceSet));</span><br><span class="line">                <span class="type">DynamicConfigManager</span> <span class="variable">manager</span> <span class="operator">=</span> DynamicConfigManager.getInstance();</span><br><span class="line">                manager.addServiceInstance(serviceDefinition.getId(), serviceInstanceSet);</span><br><span class="line">            &#125;</span><br><span class="line">        &#125;);</span><br><span class="line">        <span class="keyword">return</span> registerCenter;</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">private</span> <span class="keyword">static</span> ServiceInstance <span class="title 
function_">buildGatewayServiceInstance</span><span class="params">(Config config)</span> &#123;</span><br><span class="line">        <span class="type">String</span> <span class="variable">localIp</span> <span class="operator">=</span> NetUtils.getLocalIp();</span><br><span class="line">        <span class="type">int</span> <span class="variable">port</span> <span class="operator">=</span> config.getPort();</span><br><span class="line">        <span class="type">ServiceInstance</span> <span class="variable">serviceInstance</span> <span class="operator">=</span> <span class="keyword">new</span> <span class="title class_">ServiceInstance</span>();</span><br><span class="line">        serviceInstance.setServiceInstanceId(localIp + COLON_SEPARATOR + port);</span><br><span class="line">        serviceInstance.setIp(localIp);</span><br><span class="line">        serviceInstance.setPort(port);</span><br><span class="line">        serviceInstance.setRegisterTime(TimeUtil.currentTimeMillis());</span><br><span class="line">        <span class="keyword">return</span> serviceInstance;</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">private</span> <span class="keyword">static</span> ServiceDefinition <span class="title function_">buildGatewayServiceDefinition</span><span class="params">(Config config)</span> &#123;</span><br><span class="line">        <span class="type">ServiceDefinition</span> <span class="variable">serviceDefinition</span> <span class="operator">=</span> <span class="keyword">new</span> <span class="title class_">ServiceDefinition</span>();</span><br><span class="line">        serviceDefinition.setInvokerMap(Map.of());</span><br><span class="line">        serviceDefinition.setId(config.getApplicationName());</span><br><span class="line">        serviceDefinition.setServiceId(config.getApplicationName());</span><br><span class="line">        
serviceDefinition.setEnvType(config.getEnv());</span><br><span class="line">        <span class="keyword">return</span> serviceDefinition;</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">&#125;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>A particularly important line is this one, which loads the service provider:</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ServiceLoader.load(RegisterCenter.class)</span><br></pre></td></tr></table></figure><p>ServiceLoader is Java’s tool for loading service providers, often used to implement a service provider framework. Its job is to find and load the implementation classes of a given interface or abstract class that were registered on the classpath, so that other components or applications can use their functionality at runtime.</p><p>Specifically, here is what it does and how it is used:</p><ul><li>Service interface definition: first, define a service interface or abstract class that abstractly describes the implementations you want. Here, RegisterCenter is the service interface.</li><li>Service provider implementation: different modules or libraries can provide different implementations of the service interface; these implementation classes can be developed independently of the application and loaded at runtime.</li><li>Service provider registration: each service provider creates a file in the META-INF&#x2F;services directory whose name is the fully qualified name of the service interface and whose contents are the fully qualified names of the implementation classes. 
This tells the Java runtime which classes implement the service interface.</li><li>Load service providers: Using ServiceLoader.load(RegisterCenter.class), you can load all registered service provider implementation classes. This returns a ServiceLoader object that you can iterate over to get instances of all loaded implementation classes.</li></ul><p>This mechanism allows applications to dynamically switch between different service provider implementations without modifying the source code, which increases the scalability and flexibility of the application. It is commonly used in frameworks and libraries to let developers plug in their own implementations, such as database drivers, loggers, and plug-ins.</p><h1 id="Implement-registering-gateways-to-the-registry"><a href="#Implement-registering-gateways-to-the-registry" class="headerlink" title="Implement registering gateways to the registry"></a>Implement registering gateways to the registry</h1><p>To register the gateway to the registry, we first need to introduce the Nacos client dependency.</p><figure class="highlight xml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br></pre></td><td class="code"><pre><span class="line">&lt;dependency&gt;</span><br><span class="line">    &lt;groupId&gt;com.alibaba.nacos&lt;/groupId&gt;</span><br><span class="line">    &lt;artifactId&gt;nacos-client&lt;/artifactId&gt;</span><br><span class="line">    &lt;version&gt;2.0.4&lt;/version&gt;</span><br><span class="line">&lt;/dependency&gt;</span><br><span class="line">&lt;dependency&gt;</span><br><span class="line">    &lt;groupId&gt;blossom.project&lt;/groupId&gt;</span><br><span class="line">    &lt;artifactId&gt;BlossomGateway-Register-Center-Api&lt;/artifactId&gt;</span><br><span class="line">    &lt;version&gt;1.0&lt;/version&gt;</span><br><span class="line">&lt;/dependency&gt;</span><br></pre></td></tr></table></figure><p>After that, we can register the service using the service 
registration method provided in the Nacos client. The way is as follows:</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line"><span class="keyword">public</span> <span class="keyword">class</span> <span class="title class_">NacosRegisterCenter</span> <span class="keyword">implements</span> <span class="title class_">RegisterCenter</span> &#123;</span><br><span class="line"></span><br><span class="line"> 
   <span class="keyword">private</span> String registerAddress;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">private</span> String env;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">private</span> NamingService namingService;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">private</span> NamingMaintainService namingMaintainService;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">private</span> <span class="type">List</span> <span class="variable">registerCenterListenerList</span> <span class="operator">=</span> <span class="keyword">new</span> <span class="title class_">CopyOnWriteArrayList</span>&lt;&gt;();</span><br><span class="line"></span><br><span class="line">    <span class="keyword">public</span> <span class="keyword">void</span> <span class="title function_">init</span><span class="params">(String registerAddress, String env)</span> &#123;</span><br><span class="line">        <span class="built_in">this</span>.registerAddress = registerAddress;</span><br><span class="line">        <span class="built_in">this</span>.env = env;</span><br><span class="line"></span><br><span class="line">        <span class="keyword">try</span> &#123;</span><br><span class="line">            <span class="built_in">this</span>.namingMaintainService = NamingMaintainFactory.createMaintainService(registerAddress);</span><br><span class="line">            <span class="built_in">this</span>.namingService = NamingFactory.createNamingService(registerAddress);</span><br><span class="line">        &#125; <span class="keyword">catch</span> (NacosException e) &#123;</span><br><span class="line">            <span class="keyword">throw</span> <span class="keyword">new</span> <span class="title class_">RuntimeException</span>(e);</span><br><span class="line">        &#125;</span><br><span class="line"></span><br><span class="line">    
&#125;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">public</span> <span class="keyword">void</span> <span class="title function_">register</span><span class="params">(ServiceDefinition serviceDefinition, ServiceInstance serviceInstance)</span> &#123;</span><br><span class="line">        <span class="keyword">try</span> &#123;</span><br><span class="line"></span><br><span class="line">            <span class="type">Instance</span> <span class="variable">nacosInstance</span> <span class="operator">=</span> <span class="keyword">new</span> <span class="title class_">Instance</span>();</span><br><span class="line">            nacosInstance.setInstanceId(serviceInstance.getServiceInstanceId());</span><br><span class="line">            nacosInstance.setPort(serviceInstance.getPort());</span><br><span class="line">            nacosInstance.setIp(serviceInstance.getIp());</span><br><span class="line"></span><br><span class="line">            nacosInstance.setMetadata(Map.of(GatewayConst.META_DATA_KEY, JSON.toJSONString(serviceInstance)));</span><br><span class="line"></span><br><span class="line">            namingService.registerInstance(serviceDefinition.getServiceId(), env, nacosInstance);</span><br><span class="line"></span><br><span class="line">            namingMaintainService.updateService(serviceDefinition.getServiceId(), env, <span class="number">0</span>,</span><br><span class="line">                    Map.of(GatewayConst.META_DATA_KEY, JSON.toJSONString(serviceDefinition)));</span><br><span class="line"></span><br><span class="line">            log.info(<span class="string">&quot;register &#123;&#125; &#123;&#125;&quot;</span>, serviceDefinition, serviceInstance);</span><br><span class="line">        &#125; <span class="keyword">catch</span> (NacosException e) &#123;</span><br><span class="line">            <span class="keyword">throw</span> <span class="keyword">new</span> <span class="title 
class_">RuntimeException</span>(e);</span><br><span class="line">        &#125;</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>To follow how a service instance is registered, it helps to have some understanding of the Nacos source code: on its side, Nacos requires information such as the service’s IP, port, and service name. Once this step is complete, the service is registered with Nacos.</p><h1 id="Implementing-Service-Subscription"><a href="#Implementing-Service-Subscription" class="headerlink" title="Implementing Service Subscription"></a>Implementing Service Subscription</h1><p>Now let’s implement service subscription. To do this, we need to pull all service information from Nacos; since that information is constantly changing, we also need a scheduled task to keep our subscription data up to date. To subscribe to Nacos service information, we use the Nacos event listener, NamingEvent. In the Nacos registry, NamingEvent is an event object representing events related to the service namespace (Naming). It is used to listen for and handle changes to service instances in the namespace, so that the application can dynamically update its list of service instances and stay in sync with the registry.</p><p>Specifically, NamingEvent is mainly used for the following purposes:</p><ul><li>Listening for changes to Service Instances: The Nacos registry can contain a large number of Service Instances that may change as services come online, go offline, or update their instance metadata. 
NamingEvent allows applications to register listeners to be notified when changes to Service Instances occur.</li><li>Dynamically update the service instance list: By listening to NamingEvent, applications can get real-time status changes about service instances, so they can update the list of service instances they maintain in a timely manner to ensure that they are using the most up-to-date information about the service instances.</li><li>Implementing Load Balancing: Applications can implement a load balancing policy based on the information provided by NamingEvent, such as selecting appropriate service instances to serve service requests. The load balancing policy can be adjusted based on the availability, health state, and other metadata of the service instances.</li><li>Dynamic routing: Some applications may need to implement dynamic routing, where routing rules are dynamically updated based on changes to service instances to ensure that requests are correctly routed to available service instances.</li></ul><p>The rough code implementation is as follows:</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span 
class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br><span class="line">75</span><br><span class="line">76</span><br><span class="line">77</span><br><span class="line">78</span><br><span class="line">79</span><br><span class="line">80</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line"><span class="keyword">public</span> <span class="keyword">void</span> <span class="title function_">subscribeAllServices</span><span class="params">(RegisterCenterListener 
registerCenterListener)</span> &#123;</span><br><span class="line"></span><br><span class="line">    registerCenterListenerList.add(registerCenterListener);</span><br><span class="line"></span><br><span class="line">    doSubscribeAllServices();</span><br><span class="line"></span><br><span class="line">    <span class="type">ScheduledExecutorService</span> <span class="variable">scheduledThreadPool</span> <span class="operator">=</span> Executors.newScheduledThreadPool(<span class="number">1</span>, <span class="keyword">new</span> <span class="title class_">NameThreadFactory</span>(</span><br><span class="line">            <span class="string">&quot;doSubscribeAllServices&quot;</span>));</span><br><span class="line"></span><br><span class="line">    scheduledThreadPool.scheduleWithFixedDelay(() -&gt; doSubscribeAllServices(), <span class="number">10</span>, <span class="number">10</span>, TimeUnit.SECONDS);</span><br><span class="line"></span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line"><span class="keyword">private</span> <span class="keyword">void</span> <span class="title function_">doSubscribeAllServices</span><span class="params">()</span> &#123;</span><br><span class="line">    <span class="keyword">try</span> &#123;</span><br><span class="line"></span><br><span class="line">        <span class="type">Set</span> <span class="variable">subscribeService</span> <span class="operator">=</span></span><br><span class="line">                namingService.getSubscribeServices().stream().map(ServiceInfo::getName).collect(Collectors.toSet());</span><br><span class="line"></span><br><span class="line">        <span class="type">int</span> <span class="variable">pageNo</span> <span class="operator">=</span> <span class="number">1</span>;</span><br><span class="line">        <span class="type">int</span> <span class="variable">pageSize</span> <span class="operator">=</span> <span class="number">100</span>;</span><br><span 
class="line"></span><br><span class="line">        <span class="type">List</span> <span class="variable">serviseList</span> <span class="operator">=</span> namingService.getServicesOfServer(pageNo, pageSize, env).getData();</span><br><span class="line"></span><br><span class="line">        <span class="keyword">while</span> (CollectionUtils.isNotEmpty(serviseList)) &#123;</span><br><span class="line">            log.info(<span class="string">&quot;service list size &#123;&#125;&quot;</span>, serviseList.size());</span><br><span class="line"></span><br><span class="line">            <span class="keyword">for</span> (String service : serviseList) &#123;</span><br><span class="line"></span><br><span class="line">                <span class="keyword">if</span> (subscribeService.contains(service)) &#123;</span><br><span class="line">                    <span class="keyword">continue</span>;</span><br><span class="line">                &#125;</span><br><span class="line"></span><br><span class="line">                <span class="type">EventListener</span> <span class="variable">eventListener</span> <span class="operator">=</span> <span class="keyword">new</span> <span class="title class_">NacosRegisterListener</span>();</span><br><span class="line">                eventListener.onEvent(<span class="keyword">new</span> <span class="title class_">NamingEvent</span>(service, <span class="literal">null</span>));</span><br><span class="line">                namingService.subscribe(service, env, eventListener);</span><br><span class="line">                log.info(<span class="string">&quot;subscribe &#123;&#125; &#123;&#125;&quot;</span>, service, env);</span><br><span class="line">            &#125;</span><br><span class="line"></span><br><span class="line">            serviseList = namingService.getServicesOfServer(++pageNo, pageSize, env).getData();</span><br><span class="line">        &#125;</span><br><span class="line"></span><br><span class="line">    &#125; <span 
class="keyword">catch</span> (NacosException e) &#123;</span><br><span class="line">        <span class="keyword">throw</span> <span class="keyword">new</span> <span class="title class_">RuntimeException</span>(e);</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line"><span class="keyword">public</span> <span class="keyword">class</span> <span class="title class_">NacosRegisterListener</span> <span class="keyword">implements</span> <span class="title class_">EventListener</span> &#123;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">public</span> <span class="keyword">void</span> <span class="title function_">onEvent</span><span class="params">(Event event)</span> &#123;</span><br><span class="line">        <span class="keyword">if</span> (event <span class="keyword">instanceof</span> NamingEvent) &#123;</span><br><span class="line">            log.info(<span class="string">&quot;the triggered event info is：&#123;&#125;&quot;</span>,JSON.toJSON(event));</span><br><span class="line">            <span class="type">NamingEvent</span> <span class="variable">namingEvent</span> <span class="operator">=</span> (NamingEvent) event;</span><br><span class="line">            <span class="type">String</span> <span class="variable">serviceName</span> <span class="operator">=</span> namingEvent.getServiceName();</span><br><span class="line"></span><br><span class="line">            <span class="keyword">try</span> &#123;</span><br><span class="line"></span><br><span class="line">                <span class="type">Service</span> <span class="variable">service</span> <span class="operator">=</span> namingMaintainService.queryService(serviceName, env);</span><br><span class="line">                <span class="type">ServiceDefinition</span> <span class="variable">serviceDefinition</span> <span class="operator">=</span></span><br><span class="line">                    
    JSON.parseObject(service.getMetadata().get(GatewayConst.META_DATA_KEY),</span><br><span class="line">                                ServiceDefinition.class);</span><br><span class="line"></span><br><span class="line">                <span class="type">List</span> <span class="variable">allInstances</span> <span class="operator">=</span> namingService.getAllInstances(service.getName(), env);</span><br><span class="line">                <span class="type">Set</span> <span class="variable">set</span> <span class="operator">=</span> <span class="keyword">new</span> <span class="title class_">HashSet</span>&lt;&gt;();</span><br><span class="line"></span><br><span class="line">                <span class="keyword">for</span> (Instance instance : allInstances) &#123;</span><br><span class="line">                    <span class="type">ServiceInstance</span> <span class="variable">serviceInstance</span> <span class="operator">=</span></span><br><span class="line">                            JSON.parseObject(instance.getMetadata().get(GatewayConst.META_DATA_KEY),</span><br><span class="line">                                    ServiceInstance.class);</span><br><span class="line">                    set.add(serviceInstance);</span><br><span class="line">                &#125;</span><br><span class="line"></span><br><span class="line">                registerCenterListenerList.stream().forEach(l -&gt; l.onChange(serviceDefinition, set));</span><br><span class="line">            &#125; <span class="keyword">catch</span> (NacosException e) &#123;</span><br><span class="line">                <span class="keyword">throw</span> <span class="keyword">new</span> <span class="title class_">RuntimeException</span>(e);</span><br><span class="line">            &#125;</span><br><span class="line">        &#125;</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>At this point, we are done pulling in the latest 
configuration information whenever the information in the Nacos registry changes. That is, we have finished subscribing to the registry.</p>]]></content>
    
    
    <summary type="html">I built a gateway from scratch, and it helped me land an offer from a major tech company. This is my complete design of a gateway from 0 to 1; the material includes the thinking process, flow charts, source code, and more.</summary>
    
    
    
    <category term="java" scheme="https://www.nablepart.com/categories/java/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="source" scheme="https://www.nablepart.com/tags/source/"/>
    
    <category term="design" scheme="https://www.nablepart.com/tags/design/"/>
    
    <category term="Self-developed" scheme="https://www.nablepart.com/tags/Self-developed/"/>
    
    <category term="gateway" scheme="https://www.nablepart.com/tags/gateway/"/>
    
    <category term="big factory" scheme="https://www.nablepart.com/tags/big-factory/"/>
    
    <category term="thinking" scheme="https://www.nablepart.com/tags/thinking/"/>
    
    <category term="process" scheme="https://www.nablepart.com/tags/process/"/>
    
  </entry>
  
  <entry>
    <title>Designing a gateway from 0 to 1: Implementing circuit breaking and degradation based on Hystrix</title>
    <link href="https://www.nablepart.com/844e05489106/"/>
    <id>https://www.nablepart.com/844e05489106/</id>
    <published>2023-11-05T11:12:00.000Z</published>
    <updated>2025-08-25T09:00:39.798Z</updated>
    
    <content type="html"><![CDATA[<p>Above, we successfully implemented request retry and request rate limiting; next, we will implement circuit breaking (meltdown) and service degradation. In Spring Cloud this is the job of Hystrix, and here we will likewise use Hystrix to realize circuit breaking and service degradation. If you don’t know about Hystrix yet, it is worth having a look at it first.</p><h1 id="Dependency-introduction"><a href="#Dependency-introduction" class="headerlink" title="Dependency introduction"></a>Dependency introduction</h1><p>Since the circuit breaking and degradation here are based on Hystrix, I first need to integrate Hystrix with the service:</p><figure class="highlight xml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br></pre></td><td class="code"><pre><span class="line">&lt;properties&gt;</span><br><span class="line">    &lt;hystrix.core.version&gt;1.5.12&lt;/hystrix.core.version&gt;</span><br><span class="line">    &lt;hystrix.metrics.version&gt;1.5.12&lt;/hystrix.metrics.version&gt;</span><br><span class="line">    &lt;hystrix.javanica.version&gt;1.5.12&lt;/hystrix.javanica.version&gt;</span><br><span class="line">&lt;/properties&gt;</span><br><span class="line">&lt;dependency&gt;</span><br><span class="line">    &lt;groupId&gt;com.netflix.hystrix&lt;/groupId&gt;</span><br><span class="line">    &lt;artifactId&gt;hystrix-core&lt;/artifactId&gt;</span><br><span class="line">    &lt;version&gt;$&#123;hystrix.core.version&#125;&lt;/version&gt;</span><br><span class="line">&lt;/dependency&gt;</span><br><span class="line">&lt;dependency&gt;</span><br><span class="line">    &lt;groupId&gt;com.netflix.hystrix&lt;/groupId&gt;</span><br><span class="line">    &lt;artifactId&gt;hystrix-metrics-event-stream&lt;/artifactId&gt;</span><br><span class="line">    &lt;version&gt;$&#123;hystrix.metrics.version&#125;&lt;/version&gt;</span><br><span class="line">&lt;/dependency&gt;</span><br><span class="line">&lt;dependency&gt;</span><br><span class="line">    &lt;groupId&gt;com.netflix.hystrix&lt;/groupId&gt;</span><br><span class="line">    &lt;artifactId&gt;hystrix-javanica&lt;/artifactId&gt;</span><br><span class="line">    &lt;version&gt;$&#123;hystrix.javanica.version&#125;&lt;/version&gt;</span><br><span class="line">&lt;/dependency&gt;</span><br></pre></td></tr></table></figure><p>After introducing the dependencies as above, we can start writing the Hystrix-based circuit breaking and degradation. I’ll start by posting a set of code to give a general overview of how to implement degradation with Hystrix.</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">import</span> com.netflix.hystrix.HystrixCommand;</span><br><span class="line"><span class="keyword">import</span> com.netflix.hystrix.HystrixCommandGroupKey;</span><br><span class="line"><span class="keyword">import</span> 
com.netflix.hystrix.HystrixCommandKey;</span><br><span class="line"><span class="keyword">import</span> com.netflix.hystrix.HystrixCommandProperties;</span><br><span class="line"><span class="keyword">import</span> com.netflix.hystrix.HystrixThreadPoolProperties;</span><br><span class="line"></span><br><span class="line"><span class="keyword">public</span> <span class="keyword">class</span> <span class="title class_">MyHystrixCommand</span> <span class="keyword">extends</span> <span class="title class_">HystrixCommand</span> &#123;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">private</span> <span class="keyword">final</span> String fallbackValue;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">protected</span> <span class="title function_">MyHystrixCommand</span><span class="params">(String fallbackValue)</span> &#123;</span><br><span class="line">        <span class="built_in">super</span>(Setter</span><br><span class="line">            .withGroupKey(HystrixCommandGroupKey.Factory.asKey(<span class="string">&quot;MyGroup&quot;</span>))</span><br><span class="line">            .andCommandKey(HystrixCommandKey.Factory.asKey(<span class="string">&quot;MyCommand&quot;</span>))</span><br><span class="line">            .andCommandPropertiesDefaults(HystrixCommandProperties.Setter()</span><br><span class="line">                .withExecutionTimeoutInMilliseconds(<span class="number">1000</span>))</span><br><span class="line">            .andThreadPoolPropertiesDefaults(HystrixThreadPoolProperties.Setter()</span><br><span class="line">                .withCoreSize(<span class="number">10</span>)</span><br><span class="line">            );</span><br><span class="line">        <span class="built_in">this</span>.fallbackValue = fallbackValue;</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">protected</span> String <span 
class="title function_">run</span><span class="params">()</span> <span class="keyword">throws</span> Exception &#123;</span><br><span class="line"></span><br><span class="line">        <span class="keyword">return</span> <span class="string">&quot;Result of the actual operation&quot;</span>;</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">protected</span> String <span class="title function_">getFallback</span><span class="params">()</span> &#123;</span><br><span class="line"></span><br><span class="line">        <span class="keyword">return</span> fallbackValue;</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>In the above code, I have created a custom MyHystrixCommand class, inherited from HystrixCommand class. In the constructor of this class, you can configure some properties of the Hystrix, such as group, command name, execution timeout, and so on. 
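</p><p>To make the run&#x2F;fallback idea concrete, here is a minimal, library-free sketch of what the pattern boils down to. This is my own illustration, not Hystrix API: the task runs on a worker thread with a deadline, and any timeout or failure degrades to a fallback value. All names in it are illustrative.</p>

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Minimal, library-free sketch of the run()/getFallback() pattern:
// run a task with a deadline and degrade to a fallback value on
// timeout or failure. Names here are illustrative, not Hystrix APIs.
public class FallbackDemo {

    static String executeWithFallback(Callable<String> task, String fallback, long timeoutMs) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            // Roughly what HystrixCommand.run() plus a command timeout provides
            return pool.submit(task).get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (Exception e) {
            // Timeout, interruption, or task failure: degrade instead of failing
            return fallback;
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) {
        // A fast task returns its own result
        System.out.println(executeWithFallback(
                () -> "Result of the actual operation", "Fallback Value", 1000));
        // A slow task trips the deadline and returns the fallback
        System.out.println(executeWithFallback(() -> {
            Thread.sleep(2000);
            return "too late";
        }, "Fallback Value", 100));
    }
}
```

<p>This is only a sketch of the idea; Hystrix layers circuit-breaker state, metrics, and thread-pool isolation on top of this pattern.</p><p>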
Then, you need to implement the run method, which executes the actual business logic, and the getFallback method, which executes the degradation logic.</p><p>Next, you can use this custom Hystrix command in your application:</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="type">String</span> <span class="variable">result</span> <span class="operator">=</span> <span class="keyword">new</span> <span class="title class_">MyHystrixCommand</span>(<span class="string">&quot;Fallback Value&quot;</span>).execute();</span><br></pre></td></tr></table></figure><p>By calling the execute method, you run the Hystrix command; if the circuit breaker trips, the degradation logic executes and the fallback value is returned.</p><p>This approach lets you control circuit breaking and degradation at a fine granularity, but you need to manually configure Hystrix properties such as the timeout and the thread pool size, and you can customize them to suit your specific needs. So, based on the above, we now have a rough idea of how to implement circuit breaking and degradation with Hystrix.</p><h1 id="Service-Degradation"><a href="#Service-Degradation" class="headerlink" title="Service Degradation"></a>Service Degradation</h1><p>So next let’s write the concrete implementation code. 
First, we need to add the Hystrix configuration in the configuration center.</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line"><span class="string">&quot;hystrixConfigs&quot;</span>:[&#123;</span><br><span class="line">                <span class="string">&quot;path&quot;</span>:<span class="string">&quot;/http-server/ping&quot;</span>,</span><br><span class="line">                <span class="string">&quot;timeoutInMilliseconds&quot;</span>:<span class="number">5000</span>,</span><br><span class="line">                <span class="string">&quot;threadCoreSize&quot;</span>:<span class="number">2</span>,</span><br><span class="line">                <span class="string">&quot;fallbackResponse&quot;</span>:<span class="string">&quot;circuit break timeout&quot;</span></span><br><span class="line">            &#125;]</span><br></pre></td></tr></table></figure><p>After that, since our actual execution logic runs through the filter chain, we need to add extra Hystrix handling to the route filter: when we finally forward a request, if the request fails to be processed or times out, the filter performs the circuit-breaking degradation logic.</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span 
class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line"><span class="keyword">public</span> <span class="keyword">void</span> <span class="title function_">doFilter</span><span class="params">(GatewayContext gatewayContext)</span> <span class="keyword">throws</span> Exception &#123;</span><br><span class="line"></span><br><span class="line">    <span class="type">Optional</span> <span class="variable">hystrixConfig</span> <span class="operator">=</span> getHystrixConfig(gatewayContext);</span><br><span class="line"></span><br><span class="line">    <span class="keyword">if</span> (hystrixConfig.isPresent()) &#123;</span><br><span class="line">        routeWithHystrix(gatewayContext, hystrixConfig);</span><br><span class="line">    &#125; <span class="keyword">else</span> &#123;</span><br><span class="line">        route(gatewayContext, hystrixConfig);</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line"><span class="keyword">private</span> <span class="keyword">static</span> Optional <span class="title function_">getHystrixConfig</span><span class="params">(GatewayContext gatewayContext)</span> &#123;</span><br><span class="line">    <span class="type">Rule</span> <span class="variable">rule</span> <span class="operator">=</span> gatewayContext.getRule();</span><br><span class="line">    <span class="type">Optional</span> <span class="variable">hystrixConfig</span> <span class="operator">=</span></span><br><span class="line">            rule.getHystrixConfigs().stream().filter(c -&gt; StringUtils.equals(c.getPath(),</span><br><span class="line">                    gatewayContext.getRequest().getPath())).findFirst();</span><br><span class="line">    <span class="keyword">return</span> 
hystrixConfig;</span><br><span class="line">&#125;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>As you can see, the code above fetches the hystrix configuration from the configuration center and checks whether a circuit-breaking configuration exists for the current request path: if it does, the request goes through the circuit-breaking logic. To avoid touching the original route logic when no such configuration exists, we create an additional method that is used only in the circuit-breaking case. Here we follow the coding approach described in the previous section.</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line"><span class="keyword">private</span> <span class="keyword">void</span> <span class="title function_">routeWithHystrix</span><span
class="params">(GatewayContext gatewayContext, Optional hystrixConfig)</span> &#123;</span><br><span class="line"></span><br><span class="line">    HystrixCommand.<span class="type">Setter</span> <span class="variable">setter</span> <span class="operator">=</span></span><br><span class="line">            HystrixCommand.Setter.withGroupKey(HystrixCommandGroupKey.Factory.asKey(gatewayContext.getUniqueId()))</span><br><span class="line">                    .andCommandKey(HystrixCommandKey.Factory.asKey(gatewayContext.getRequest().getPath()))</span><br><span class="line"></span><br><span class="line">                    .andThreadPoolPropertiesDefaults(HystrixThreadPoolProperties.Setter()</span><br><span class="line">                            .withCoreSize(hystrixConfig.get().getThreadCoreSize()))</span><br><span class="line">                    .andCommandPropertiesDefaults(HystrixCommandProperties.Setter()</span><br><span class="line"></span><br><span class="line">                            .withExecutionIsolationStrategy(HystrixCommandProperties.ExecutionIsolationStrategy.THREAD)</span><br><span class="line"></span><br><span class="line">                            .withExecutionTimeoutInMilliseconds(hystrixConfig.get().getTimeoutInMilliseconds())</span><br><span class="line">                            .withExecutionIsolationThreadInterruptOnTimeout(<span class="literal">true</span>)</span><br><span class="line">                            .withExecutionTimeoutEnabled(<span class="literal">true</span>));</span><br><span class="line"></span><br><span class="line">    <span class="keyword">new</span> <span class="title class_">HystrixCommand</span>(setter) &#123;</span><br><span class="line"></span><br><span class="line">        <span class="keyword">protected</span> Object <span class="title function_">run</span><span class="params">()</span> <span class="keyword">throws</span> Exception &#123;</span><br><span class="line"></span><br><span class="line">           
 route(gatewayContext, hystrixConfig).get();</span><br><span class="line">            <span class="keyword">return</span> <span class="literal">null</span>;</span><br><span class="line">        &#125;</span><br><span class="line"></span><br><span class="line">        <span class="keyword">protected</span> Object <span class="title function_">getFallback</span><span class="params">()</span> &#123;</span><br><span class="line"></span><br><span class="line">            gatewayContext.setResponse(hystrixConfig.get().getFallbackResponse());</span><br><span class="line">            gatewayContext.written();</span><br><span class="line">            <span class="keyword">return</span> <span class="literal">null</span>;</span><br><span class="line">        &#125;</span><br><span class="line">    &#125;.execute();</span><br><span class="line">&#125;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h1 id="Demo"><a href="#Demo" class="headerlink" title="Demo"></a>Demo</h1><p>Once the above code is written, the circuit-breaking and fallback feature is complete. Let’s take a look at how it works: start the backend service and make it block for a long time. A timeout exception is then triggered, and the fallback response is returned.<img src="https://s2.loli.net/2023/11/05/SJaEQse56bTHwpm.webp"></p>]]></content>
    
    
    <summary type="html">I built my own gateway from 0 to 1, and it helped me land an offer from a major tech company. This series is my complete design of the gateway, including the thought process, flow charts, and source code.</summary>
    
    
    
    <category term="java" scheme="https://www.nablepart.com/categories/java/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="source" scheme="https://www.nablepart.com/tags/source/"/>
    
    <category term="design" scheme="https://www.nablepart.com/tags/design/"/>
    
    <category term="Self-developed" scheme="https://www.nablepart.com/tags/Self-developed/"/>
    
    <category term="gateway" scheme="https://www.nablepart.com/tags/gateway/"/>
    
    <category term="big factory" scheme="https://www.nablepart.com/tags/big-factory/"/>
    
    <category term="thinking" scheme="https://www.nablepart.com/tags/thinking/"/>
    
    <category term="process" scheme="https://www.nablepart.com/tags/process/"/>
    
  </entry>
  
  <entry>
    <title>Designing a gateway from 0 to 1 Implementation of retry and flow limiting</title>
    <link href="https://www.nablepart.com/91e9927a0abc/"/>
    <id>https://www.nablepart.com/91e9927a0abc/</id>
    <published>2023-11-05T10:12:00.000Z</published>
    <updated>2025-08-25T09:00:39.798Z</updated>
    
    <content type="html"><![CDATA[<p>The above has already covered how to design a highly available and stable gateway, so here are two of the more common mechanisms for achieving it: retries and flow limiting.</p><h1 id="Retries"><a href="#Retries" class="headerlink" title="Retries"></a>Retries</h1><p>Here, I’m going to retry requests that fail with IO exceptions or request timeouts. First, we’ll add a retry function to the route filter that will retry the request in case of exceptions like the two above. Of course, we need to add some additional configuration parameters to set the number of retries and other information.</p><p><img src="https://s2.loli.net/2023/11/05/V72Zx1GXlOErW3v.webp"> The retry code, in fact, simply calls the doFilter method again to re-execute the logic in the route filter.</p><figure class="highlight css"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">private void doRetry(GatewayContext gatewayContext,int retryTimes)&#123;</span><br><span class="line">       System<span class="selector-class">.out</span><span class="selector-class">.println</span>(&quot;Current retry count: &quot;+retryTimes);</span><br><span class="line">       gatewayContext<span class="selector-class">.setCurrentRetryTimes</span>(retryTimes+<span class="number">1</span>);</span><br><span class="line">       try &#123;</span><br><span class="line">           doFilter(gatewayContext);</span><br><span class="line">       &#125; catch (Exception e) &#123;</span><br><span class="line">           throw new RuntimeException(e);</span><br><span class="line">       &#125;</span><br><span class="line">   &#125;</span><br></pre></td></tr></table></figure><p>Finally, we have our service call the backend service and 
set a long blocking sleep there.<img src="https://s2.loli.net/2023/11/05/FPcA8LywVoCEpRJ.webp"> The implementation of retries is relatively straightforward, provided of course that you understand the previous code, or at least the request-forwarding part.</p><h1 id="Flow-limiting"><a href="#Flow-limiting" class="headerlink" title="Flow limiting"></a>Flow limiting</h1><p>Common algorithms for limiting flow are the token bucket algorithm and the leaky bucket algorithm, and either can be used here. We also need to configure the flow-limiting rules in the configuration center, for example the path or service whose flow should be limited. In addition, depending on whether your service is distributed or monolithic, you need different ways to store the limiting state: a distributed service needs Redis, while a monolithic one can use a local cache such as Guava or Caffeine. 
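</p><p>To make the token bucket concrete before wiring it into the gateway, here is a minimal sketch. The class and method names are illustrative only, not part of the gateway source: tokens refill continuously at a fixed rate, and a request passes only while enough tokens remain.</p>

```java
// Minimal token-bucket sketch; the names are illustrative, not gateway source.
class TokenBucket {
    private final double capacity;       // maximum tokens the bucket can hold
    private final double refillPerNano;  // tokens added per elapsed nanosecond
    private double tokens;               // tokens currently available
    private long lastRefill;             // timestamp of the last refill

    TokenBucket(double capacity, double refillPerSecond) {
        this.capacity = capacity;
        this.refillPerNano = refillPerSecond / 1_000_000_000.0;
        this.tokens = capacity;          // the bucket starts full
        this.lastRefill = System.nanoTime();
    }

    // Take `permits` tokens if available; returning false means "reject the request".
    synchronized boolean tryAcquire(int permits) {
        long now = System.nanoTime();
        // Refill in proportion to the elapsed time, capped at the capacity.
        tokens = Math.min(capacity, tokens + (now - lastRefill) * refillPerNano);
        lastRefill = now;
        if (tokens >= permits) {
            tokens -= permits;
            return true;
        }
        return false;
    }
}
```

<p>The leaky bucket differs in that it drains requests at a constant rate instead of allowing bursts up to the bucket capacity; Guava’s RateLimiter, which appears later in this article, is a production-ready member of this family.</p><p>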
As usual, we first write an interface, which is used to get the corresponding flow restriction filter.</p><figure class="highlight css"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">public interface GatewayFlowControlRule &#123;</span><br><span class="line"></span><br><span class="line">    void doFlowControlFilter(Rule<span class="selector-class">.FlowControlConfig</span> flowControlConfig, String serviceId);</span><br><span class="line">&#125;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>After that, we start writing a flow limiting filter to get the corresponding flow limiting rules based on the request.</p><figure class="highlight css"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span 
class="line">35</span><br><span class="line">36</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line"><span class="keyword">@Slf4j</span></span><br><span class="line">@FilterAspect(id=FLOW_CTL_FILTER_ID,</span><br><span class="line">        name = FLOW_CTL_FILTER_NAME,</span><br><span class="line">        order = FLOW_CTL_FILTER_ORDER)</span><br><span class="line">public class FlowControlFilter implements Filter &#123;</span><br><span class="line">    <span class="keyword">@Override</span></span><br><span class="line">    public void doFilter(GatewayContext ctx) throws Exception &#123;</span><br><span class="line">        Rule rule = ctx<span class="selector-class">.getRule</span>();</span><br><span class="line">        if(rule != null)&#123;</span><br><span class="line">            // get the flow control rules</span><br><span class="line">            Set&lt;Rule<span class="selector-class">.FlowControlConfig</span>&gt; flowControlConfigs = rule<span class="selector-class">.getFlowControlConfigs</span>();</span><br><span class="line">            Iterator iterator = flowControlConfigs<span class="selector-class">.iterator</span>();</span><br><span class="line">            Rule<span class="selector-class">.FlowControlConfig</span> flowControlConfig;</span><br><span class="line">            while (iterator<span class="selector-class">.hasNext</span>())&#123;</span><br><span class="line">                GatewayFlowControlRule flowControlRule = null;</span><br><span class="line">                flowControlConfig = (Rule<span class="selector-class">.FlowControlConfig</span>)iterator<span class="selector-class">.next</span>();</span><br><span class="line">                if(flowControlConfig == null)&#123;</span><br><span class="line">                    continue;</span><br><span class="line">                &#125;</span><br><span class="line">                String <span class="selector-tag">path</span> = ctx<span
class="selector-class">.getRequest</span>()<span class="selector-class">.getPath</span>();</span><br><span class="line">                if(flowControlConfig<span class="selector-class">.getType</span>()<span class="selector-class">.equalsIgnoreCase</span>(FLOW_CTL_TYPE_PATH)</span><br><span class="line">                        &amp;&amp; <span class="selector-tag">path</span><span class="selector-class">.equals</span>(flowControlConfig<span class="selector-class">.getValue</span>()))&#123;</span><br><span class="line">                    flowControlRule = FlowControlByPathRule<span class="selector-class">.getInstance</span>(rule<span class="selector-class">.getServiceId</span>(),<span class="selector-tag">path</span>);</span><br><span class="line">                &#125;else if(flowControlConfig<span class="selector-class">.getType</span>()<span class="selector-class">.equalsIgnoreCase</span>(FLOW_CTL_TYPE_SERVICE))&#123;</span><br><span class="line">                    //TODO service-based flow control can be implemented here</span><br><span class="line">                &#125;</span><br><span class="line">                if(flowControlRule != null)&#123;</span><br><span class="line">                    // perform flow control</span><br><span class="line">                    flowControlRule<span class="selector-class">.doFlowControlFilter</span>(flowControlConfig,rule<span class="selector-class">.getServiceId</span>());</span><br><span class="line">                &#125;</span><br><span class="line">            &#125;</span><br><span class="line">        &#125;</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>Once you’ve got the specified flow-limiting rule, you can start writing the actual limiting logic. For example, if we want to restrict flow based on the path, the first information we need is the service and the request path. 
We also cache the rule objects, keyed by service and path, so that whenever a request comes in we can fetch the matching rule straight from the cache.</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line"><span class="keyword">private</span> <span class="keyword">static</span> <span class="type">ConcurrentHashMap</span> <span class="variable">servicePathMap</span> <span class="operator">=</span> <span class="keyword">new</span> <span class="title class_">ConcurrentHashMap</span>&lt;&gt;();</span><br><span class="line"></span><br><span class="line"><span class="keyword">public</span> <span class="keyword">static</span> FlowControlByPathRule <span class="title function_">getInstance</span><span class="params">(String serviceId, String path)</span> &#123;</span><br><span class="line">    <span class="type">StringBuffer</span> <span class="variable">buffer</span> <span class="operator">=</span> <span class="keyword">new</span> <span class="title class_">StringBuffer</span>();</span><br><span class="line">    <span class="type">String</span> <span class="variable">key</span> <span class="operator">=</span> buffer.append(serviceId).append(<span class="string">&quot;.&quot;</span>).append(path).toString();</span><br><span class="line">    <span class="type">FlowControlByPathRule</span> <span class="variable">flowControlByPathRule</span> <span class="operator">=</span> servicePathMap.get(key);</span><br><span
class="line"></span><br><span class="line">    <span class="keyword">if</span> (flowControlByPathRule == <span class="literal">null</span>) &#123;</span><br><span class="line">        flowControlByPathRule = <span class="keyword">new</span> <span class="title class_">FlowControlByPathRule</span>(serviceId, path, <span class="keyword">new</span> <span class="title class_">RedisCountLimiter</span>(<span class="keyword">new</span> <span class="title class_">JedisUtil</span>()));</span><br><span class="line">        servicePathMap.put(key, flowControlByPathRule);</span><br><span class="line">    &#125;</span><br><span class="line">    <span class="keyword">return</span> flowControlByPathRule;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>After obtaining the flow-limiting rule, we can look at how to perform the actual limiting. Once we have the configuration details, such as whether the service is distributed, the length of the time window, and the allowed number of requests, we can write the concrete limiting code. 
For example, if the configuration indicates that the service is distributed, we use Redis and store the current request path, the allowed number of requests, and related information there.</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line">   <span
class="keyword">public</span>  <span class="type">boolean</span> <span class="title function_">doFlowControl</span><span class="params">(String key,<span class="type">int</span> limit,<span class="type">int</span> expire)</span>&#123;</span><br><span class="line">       <span class="keyword">try</span> &#123;</span><br><span class="line"></span><br><span class="line">           <span class="type">Object</span> <span class="variable">object</span> <span class="operator">=</span> jedisUtil.executeScript(key,limit,expire);</span><br><span class="line">           <span class="keyword">if</span>(object == <span class="literal">null</span>)&#123;</span><br><span class="line">               <span class="keyword">return</span> <span class="literal">true</span>;</span><br><span class="line">           &#125;</span><br><span class="line">           <span class="type">Long</span> <span class="variable">result</span> <span class="operator">=</span> Long.valueOf(object.toString());</span><br><span class="line">           <span class="keyword">if</span>(FAILED_RESULT == result)&#123;</span><br><span class="line">               <span class="keyword">return</span>  <span class="literal">false</span>;</span><br><span class="line">           &#125;</span><br><span class="line">       &#125;<span class="keyword">catch</span> (Exception e)&#123;</span><br><span class="line">           <span class="keyword">throw</span>  <span class="keyword">new</span> <span class="title class_">RuntimeException</span>(<span class="string">&quot;Distributed flow limiting error&quot;</span>);</span><br><span class="line">       &#125;</span><br><span class="line">       <span class="keyword">return</span> <span class="literal">true</span>;</span><br><span class="line">   &#125;</span><br><span class="line"></span><br><span class="line"><span class="keyword">public</span> Object <span class="title function_">executeScript</span><span class="params">(String key, <span class="type">int</span> limit, <span
class="type">int</span> expire)</span>&#123;</span><br><span class="line">       <span class="type">Jedis</span> <span class="variable">jedis</span> <span class="operator">=</span> jedisPool.getJedis();</span><br><span class="line">       <span class="type">String</span> <span class="variable">lua</span> <span class="operator">=</span> buildLuaScript();</span><br><span class="line">       <span class="type">String</span> <span class="variable">scriptLoad</span> <span class="operator">=</span>jedis.scriptLoad(lua);</span><br><span class="line">       <span class="keyword">try</span> &#123;</span><br><span class="line">           <span class="type">Object</span> <span class="variable">result</span> <span class="operator">=</span> jedis.evalsha(scriptLoad, Arrays.asList(key), Arrays.asList(String.valueOf(expire), String.valueOf(limit)));</span><br><span class="line">           System.out.println(result);</span><br><span class="line">           <span class="keyword">return</span> result;</span><br><span class="line">       &#125; <span class="keyword">catch</span> (Exception e) &#123;</span><br><span class="line">           e.printStackTrace();</span><br><span class="line">       &#125; <span class="keyword">finally</span> &#123;</span><br><span class="line">           <span class="keyword">if</span> (jedis != <span class="literal">null</span>) &#123;</span><br><span class="line">               <span class="keyword">try</span> &#123;</span><br><span class="line">                   jedis.close();</span><br><span class="line">               &#125; <span class="keyword">catch</span> (Exception e) &#123;</span><br><span class="line">                   e.printStackTrace();</span><br><span class="line">               &#125;</span><br><span class="line">           &#125;</span><br><span class="line">       &#125;</span><br><span class="line">       <span class="keyword">return</span> <span class="literal">null</span>;</span><br><span class="line">   &#125;</span><br><span 
class="line"></span><br><span class="line">   <span class="keyword">private</span> <span class="keyword">static</span> String <span class="title function_">buildLuaScript</span><span class="params">()</span> &#123;</span><br><span class="line">       <span class="type">String</span> <span class="variable">lua</span> <span class="operator">=</span> <span class="string">&quot;local num = redis.call(&#x27;incr&#x27;, KEYS[1])\n&quot;</span> +</span><br><span class="line">               <span class="string">&quot;if tonumber(num) == 1 then\n&quot;</span> +</span><br><span class="line">               <span class="string">&quot;\tredis.call(&#x27;expire&#x27;, KEYS[1], ARGV[1])\n&quot;</span> +</span><br><span class="line">               <span class="string">&quot;\treturn 1\n&quot;</span> +</span><br><span class="line">               <span class="string">&quot;elseif tonumber(num) &gt; tonumber(ARGV[2]) then\n&quot;</span> +</span><br><span class="line">               <span class="string">&quot;\treturn 0\n&quot;</span> +</span><br><span class="line">               <span class="string">&quot;else \n&quot;</span> +</span><br><span class="line">               <span class="string">&quot;\treturn 1\n&quot;</span> +</span><br><span class="line">               <span class="string">&quot;end\n&quot;</span>;</span><br><span class="line">       <span class="keyword">return</span> lua;</span><br><span class="line">   &#125;</span><br></pre></td></tr></table></figure><p>And if it is not a distributed project, you can consider using a local cache like Guava. 
The implementation is pretty much the same, as follows</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line"><span class="keyword">public</span> <span class="keyword">class</span> <span class="title class_">GuavaCountLimiter</span> &#123;</span><br><span class="line"></span><br><span class="line">    <span 
class="keyword">private</span> RateLimiter rateLimiter;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">private</span> <span class="type">double</span> maxPermits;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">public</span> <span class="title function_">GuavaCountLimiter</span><span class="params">(<span class="type">double</span> maxPermits)</span> &#123;</span><br><span class="line">        <span class="built_in">this</span>.maxPermits = maxPermits;</span><br><span class="line">        rateLimiter = RateLimiter.create(maxPermits);</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">public</span> <span class="title function_">GuavaCountLimiter</span><span class="params">(<span class="type">double</span> maxPermits, <span class="type">long</span> warmUpPeriodAsSecond)</span> &#123;</span><br><span class="line">        <span class="built_in">this</span>.maxPermits = maxPermits;</span><br><span class="line">        rateLimiter = RateLimiter.create(maxPermits, warmUpPeriodAsSecond, TimeUnit.SECONDS);</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">public</span> <span class="keyword">static</span> <span class="type">ConcurrentHashMap</span> <span class="variable">resourceRateLimiterMap</span> <span class="operator">=</span> <span class="keyword">new</span> <span class="title class_">ConcurrentHashMap</span>();</span><br><span class="line"></span><br><span class="line">    <span class="keyword">public</span> <span class="keyword">static</span> GuavaCountLimiter <span class="title function_">getInstance</span><span class="params">(String serviceId, Rule.FlowControlConfig flowControlConfig)</span> &#123;</span><br><span class="line">        <span class="keyword">if</span> (StringUtils.isEmpty(serviceId) || flowControlConfig == <span 
class="literal">null</span> || StringUtils.isEmpty(flowControlConfig.getValue()) || StringUtils.isEmpty(flowControlConfig.getConfig()) || StringUtils.isEmpty(flowControlConfig.getType())) &#123;</span><br><span class="line">            <span class="keyword">return</span> <span class="literal">null</span>;</span><br><span class="line">        &#125;</span><br><span class="line">        <span class="type">StringBuffer</span> <span class="variable">buffer</span> <span class="operator">=</span> <span class="keyword">new</span> <span class="title class_">StringBuffer</span>();</span><br><span class="line">        <span class="type">String</span> <span class="variable">key</span> <span class="operator">=</span> buffer.append(serviceId).append(<span class="string">&quot;.&quot;</span>).append(flowControlConfig.getValue()).toString();</span><br><span class="line">        <span class="type">GuavaCountLimiter</span> <span class="variable">countLimiter</span> <span class="operator">=</span> resourceRateLimiterMap.get(key);</span><br><span class="line">        <span class="keyword">if</span> (countLimiter == <span class="literal">null</span>) &#123;</span><br><span class="line"></span><br><span class="line">            <span class="type">Map</span> <span class="variable">configMap</span> <span class="operator">=</span> JSON.parseObject(flowControlConfig.getConfig(), Map.class);</span><br><span class="line"></span><br><span class="line">            <span class="keyword">if</span> (!configMap.containsKey(FLOW_CTL_LIMIT_DURATION) || !configMap.containsKey(FLOW_CTL_LIMIT_PERMITS)) &#123;</span><br><span class="line">                <span class="keyword">return</span> <span class="literal">null</span>;</span><br><span class="line">            &#125;</span><br><span class="line"></span><br><span class="line">            <span class="type">double</span> <span class="variable">permits</span> <span class="operator">=</span> configMap.get(FLOW_CTL_LIMIT_PERMITS);</span><br><span 
class="line">            countLimiter = <span class="keyword">new</span> <span class="title class_">GuavaCountLimiter</span>(<span class="keyword">permits</span>);</span><br><span class="line">            resourceRateLimiterMap.putIfAbsent(key, countLimiter);</span><br><span class="line">        &#125;</span><br><span class="line">        <span class="keyword">return</span> countLimiter;</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">public</span> <span class="type">boolean</span> <span class="title function_">acquire</span><span class="params">(<span class="type">int</span> <span class="keyword">permits</span>)</span> &#123;</span><br><span class="line">        <span class="type">boolean</span> <span class="variable">success</span> <span class="operator">=</span> rateLimiter.tryAcquire(<span class="keyword">permits</span>);</span><br><span class="line">        <span class="keyword">if</span> (success) &#123;</span><br><span class="line">            <span class="keyword">return</span> <span class="literal">true</span>;</span><br><span class="line">        &#125;</span><br><span class="line">        <span class="keyword">return</span> <span class="literal">false</span>;</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>So up to this point, the code for how to do flow limiting has been roughly implemented. 
After that, we can start testing our flow-limiting code once we’ve configured the configuration center information.</p><figure class="highlight js"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br></pre></td><td class="code"><pre><span class="line">&#123;</span><br><span class="line">    <span class="string">&quot;rules&quot;</span>: [</span><br><span class="line">        &#123;</span><br><span class="line">            <span class="string">&quot;id&quot;</span>:<span class="string">&quot;1&quot;</span>,</span><br><span class="line">            <span class="string">&quot;name&quot;</span>:<span class="string">&quot;test-1&quot;</span>,</span><br><span class="line">            <span class="string">&quot;protocol&quot;</span>:<span 
class="string">&quot;http&quot;</span>,</span><br><span class="line">            <span class="string">&quot;serviceId&quot;</span>:<span class="string">&quot;backend-http-server&quot;</span>,</span><br><span class="line">            <span class="string">&quot;prefix&quot;</span>:<span class="string">&quot;/user&quot;</span>,</span><br><span class="line">            <span class="string">&quot;paths&quot;</span>:[</span><br><span class="line">                <span class="string">&quot;/http-server/ping&quot;</span>,<span class="string">&quot;/user/update&quot;</span></span><br><span class="line">            ],</span><br><span class="line">            <span class="string">&quot;filterConfigs&quot;</span>:[&#123;</span><br><span class="line">                    <span class="string">&quot;id&quot;</span>:<span class="string">&quot;load_balance_filter&quot;</span>,</span><br><span class="line">                    <span class="string">&quot;config&quot;</span>:&#123;</span><br><span class="line">                        <span class="string">&quot;load_balance&quot;</span>:<span class="string">&quot;Random&quot;</span></span><br><span class="line">                    &#125;</span><br><span class="line">                &#125;,&#123;</span><br><span class="line">                    <span class="string">&quot;id&quot;</span>:<span class="string">&quot;flow_ctl_filter&quot;</span></span><br><span class="line">            &#125;],</span><br><span class="line">            <span class="string">&quot;flowControlConfigs&quot;</span>:[&#123;</span><br><span class="line">                <span class="string">&quot;type&quot;</span>:<span class="string">&quot;path&quot;</span>,</span><br><span class="line">                <span class="string">&quot;model&quot;</span>:<span class="string">&quot;distributed&quot;</span>,</span><br><span class="line">                <span class="string">&quot;value&quot;</span>:<span class="string">&quot;/http-server/ping&quot;</span>,</span><br><span 
class="line">                <span class="string">&quot;config&quot;</span>:&#123;</span><br><span class="line">                    <span class="string">&quot;duration&quot;</span>:<span class="number">20</span>,</span><br><span class="line">                    <span class="string">&quot;permits&quot;</span>:<span class="number">2</span></span><br><span class="line">                &#125;</span><br><span class="line">            &#125;],</span><br><span class="line">            <span class="string">&quot;retryConfig&quot;</span>:&#123;</span><br><span class="line">                <span class="string">&quot;times&quot;</span>:<span class="number">5</span></span><br><span class="line">            &#125;,</span><br><span class="line">            <span class="string">&quot;hystixConfigs&quot;</span>:[&#123;</span><br><span class="line">                <span class="string">&quot;path&quot;</span>:<span class="string">&quot;/http-server/ping&quot;</span>,</span><br><span class="line">                <span class="string">&quot;timeoutInMilliseconds&quot;</span>:<span class="number">5000</span>,</span><br><span class="line">                <span class="string">&quot;threadCoreSize&quot;</span>:<span class="number">2</span>,</span><br><span class="line">                <span class="string">&quot;fallbackResponse&quot;</span>:<span class="string">&quot;熔断超时&quot;</span></span><br><span class="line">            &#125;]</span><br><span class="line">        &#125;</span><br><span class="line">    ]</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>When we send an excessive number of requests using apifox, we see that the error is reported as follows<img src="https://p3-juejin.byteimg.com/tos-cn-i-k3u1fbpfcp/870b58d26417474290058db5e27e5f0e~tplv-k3u1fbpfcp-jj-mark:3024:0:0:0:q75.awebp#?w=2044&h=414&s=183734&e=png&b=fefdfd"> Up to the current position we have implemented retry and flow limiting, then in the next article we need to implement fusing 
and degrading (that is, circuit breaking and graceful degradation). In practice, flow limiting and circuit breaking naturally go hand in hand.</p>]]></content>
    
    
    <summary type="html">Self-developed a gateway that helped me successfully land a big factory. This is a complete set of my complete design out of a gateway from 0 to 1, the information contains the thinking process, flow charts, source code and other kinds of information.</summary>
    
    
    
    <category term="java" scheme="https://www.nablepart.com/categories/java/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="source" scheme="https://www.nablepart.com/tags/source/"/>
    
    <category term="design" scheme="https://www.nablepart.com/tags/design/"/>
    
    <category term="Self-developed" scheme="https://www.nablepart.com/tags/Self-developed/"/>
    
    <category term="gateway" scheme="https://www.nablepart.com/tags/gateway/"/>
    
    <category term="big factory" scheme="https://www.nablepart.com/tags/big-factory/"/>
    
    <category term="thinking" scheme="https://www.nablepart.com/tags/thinking/"/>
    
    <category term="process" scheme="https://www.nablepart.com/tags/process/"/>
    
  </entry>
  
  <entry>
    <title>Designing a Gateway from 0 to 1 Filter Chain Implementation - Route Forwarding Filters</title>
    <link href="https://www.nablepart.com/3bf09c95be00/"/>
    <id>https://www.nablepart.com/3bf09c95be00/</id>
    <published>2023-11-05T09:12:00.000Z</published>
    <updated>2025-08-25T09:00:39.798Z</updated>
    
    <content type="html"><![CDATA[<p><a href="https://www.bilibili.com/video/BV1eC4y1n73c/?vd_source=1d4d63e205b3ad352b4771f87295d16d#reply747752344">Link to effects demo</a></p><h1 id="Analyze"><a href="#Analyze" class="headerlink" title="Analyze"></a>Analyze</h1><p>As we know, route forwarding is the last operation to be performed after all the filtering logic is processed by the gateway, it is responsible for forwarding our request to a specified backend service instance, here we refer to the implementation of SpringCloudGateway to simulate a route forwarding filter.</p><p>A route forwarding filter in Spring Cloud is a component of the Spring Cloud Gateway (a microservices gateway) that is used to perform filtering and forwarding operations on incoming HTTP requests. These filters allow us to modify, validate, log, etc. requests before they reach the target service. Here are the main roles of route forwarding filters and why they are needed:</p><ul><li>Request modification and redirection: route forwarding filters allow you to modify various parts of the request, including the request header, request body, request parameters, etc., to suit the requirements of the target service. You can add, delete, or modify request information, and even redirect the request to a different target service, enabling dynamic request routing.</li><li>:: Security: With route forwarding filters, you can add security-related features, such as authentication and authorization, to ensure that only authorized users can access certain services. This helps protect individual services in a microservice architecture from unauthorized access.</li><li>Caching: You can use filters to enable caching of requests and responses to lighten the load on the target service, improve performance, and reduce response times. 
This is useful for services that handle a large number of requests.</li><li>Logging and monitoring: Route forwarding filters can also be used to log information about requests and responses for monitoring and troubleshooting purposes. You can add logging and metrics collection to the filter to understand the performance and status of requests.</li><li>Traffic control: With route forwarding filters, you can implement traffic control and flow limiting to prevent a particular service from being overwhelmed by too many requests. This helps maintain service availability and performance.</li><li>Request forwarding and load balancing: The most common uses are to forward requests to multiple destination services on the backend and to implement load balancing policies to ensure that requests are evenly distributed to different service instances.</li></ul><h1 id="Code-Implementation"><a href="#Code-Implementation" class="headerlink" title="Code Implementation"></a>Code Implementation</h1><p>A lot has been said above about why we should use route forwarding filters, so let’s now analyze the implementation of route forwarding filters. One thing we have analyzed from our early architectural design diagrams is that we are using an asynchronous way to send our http requests, and here I am using Netty with AsyncHttpClient to implement the asynchronous IO communication functionality. 
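The callback pattern this enables can be sketched with a plain CompletableFuture. The sendRequest stub below only simulates an asynchronous HTTP call; the real gateway delegates to AsyncHttpClient, and all names here are illustrative:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncCallbackSketch {

    // Stand-in for an asynchronous HTTP call: completes on a pool thread,
    // so the caller's thread never blocks waiting for the response.
    static CompletableFuture<String> sendRequest(String url) {
        return CompletableFuture.supplyAsync(() -> "response-from:" + url);
    }

    // Mirrors the filter's shape: fire the request, then attach a callback
    // that handles either the response or the failure.
    public static String call(String url) {
        StringBuilder result = new StringBuilder();
        CompletableFuture<String> future = sendRequest(url)
                // whenComplete runs the callback on the completing thread;
                // whenCompleteAsync would hand it off to another pool instead.
                .whenComplete((response, throwable) -> {
                    if (throwable != null) {
                        result.append("error:").append(throwable.getMessage());
                    } else {
                        result.append(response);
                    }
                });
        future.join(); // blocking here only for the demo; the gateway stays fully async
        return result.toString();
    }

    public static void main(String[] args) {
        System.out.println(call("backend-http-server/ping"));
        // prints "response-from:backend-http-server/ping"
    }
}
```

The choice between whenComplete and whenCompleteAsync is exactly the switch the route filter makes on the whenComplete config flag.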
If you are interested in this piece you can search for yourself: Netty asynchronous IO communication model knowledge and Netty with AsyncHttpClient use.</p><figure class="highlight c"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span 
class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br></pre></td><td class="code"><pre><span class="line">package blossom.project.core.netty;</span><br><span class="line"></span><br><span class="line">import blossom.project.core.Config;</span><br><span class="line">import blossom.project.core.LifeCycle;</span><br><span class="line">import blossom.project.core.helper.AsyncHttpHelper;</span><br><span class="line">import io.netty.buffer.PooledByteBufAllocator;</span><br><span class="line">import io.netty.channel.EventLoopGroup;</span><br><span class="line">import lombok.<span class="keyword">extern</span>.slf4j.Slf4j;</span><br><span class="line">import org.asynchttpclient.AsyncHttpClient;</span><br><span class="line">import org.asynchttpclient.DefaultAsyncHttpClient;</span><br><span class="line">import org.asynchttpclient.DefaultAsyncHttpClientConfig;</span><br><span class="line">import java.io.IOException;</span><br><span class="line"></span><br><span class="line">@Slf4j</span><br><span class="line">public <span class="class"><span class="keyword">class</span> <span class="title">NettyHttpClient</span> <span class="title">implements</span> <span class="title">LifeCycle</span> &#123;</span></span><br><span class="line">    private final Config config;</span><br><span class="line"></span><br><span class="line">    private final EventLoopGroup eventLoopGroupWoker;</span><br><span class="line"></span><br><span class="line">    private AsyncHttpClient asyncHttpClient;</span><br><span class="line"></span><br><span class="line">    public <span class="title function_">NettyHttpClient</span><span class="params">(Config config, EventLoopGroup eventLoopGroupWoker)</span> &#123;</span><br><span class="line">        this.config = config;</span><br><span class="line">        this.eventLoopGroupWoker = eventLoopGroupWoker;</span><br><span class="line">        init();</span><br><span class="line">    
&#125;</span><br><span class="line"></span><br><span class="line">    @Override</span><br><span class="line">    public <span class="type">void</span> <span class="title function_">init</span><span class="params">()</span> &#123;</span><br><span class="line">        DefaultAsyncHttpClientConfig.Builder builder = new DefaultAsyncHttpClientConfig.Builder()</span><br><span class="line">                .setEventLoopGroup(eventLoopGroupWoker)</span><br><span class="line">                .setConnectTimeout(config.getHttpConnectTimeout())</span><br><span class="line">                .setRequestTimeout(config.getHttpRequestTimeout())</span><br><span class="line">                .setMaxRedirects(config.getHttpMaxRequestRetry())</span><br><span class="line">                .setAllocator(PooledByteBufAllocator.DEFAULT)</span><br><span class="line">                .setCompressionEnforced(<span class="literal">true</span>)</span><br><span class="line">                .setMaxConnections(config.getHttpMaxConnections())</span><br><span class="line">                .setMaxConnectionsPerHost(config.getHttpConnectionsPerHost())</span><br><span class="line">                .setPooledConnectionIdleTimeout(config.getHttpPooledConnectionIdleTimeout());</span><br><span class="line">        this.asyncHttpClient = new DefaultAsyncHttpClient(builder.build());</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    @Override</span><br><span class="line">    public <span class="type">void</span> <span class="title function_">start</span><span class="params">()</span> &#123;</span><br><span class="line">        AsyncHttpHelper.getInstance().initialized(asyncHttpClient);</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    @Override</span><br><span class="line">    public <span class="type">void</span> <span class="title function_">shutdown</span><span class="params">()</span> &#123;</span><br><span 
class="line">        <span class="keyword">if</span> (asyncHttpClient != null) &#123;</span><br><span class="line">            try &#123;</span><br><span class="line">                this.asyncHttpClient.close();</span><br><span class="line">            &#125; catch (IOException e) &#123;</span><br><span class="line">                <span class="built_in">log</span>.error(<span class="string">&quot;NettyHttpClient shutdown error&quot;</span>, e);</span><br><span class="line">            &#125;</span><br><span class="line">        &#125;</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>Directly look at the code is actually that I integrated these two tools, the realization of the use of Netty to complete the function of asynchronous communication. And initialized our AsyncHttpClient asynchronous http request sending tool. And we have finished packaging this tool, to realize the response to the request is relatively easy, we only need to write our response content back to the response body of our request can be.</p><figure class="highlight c"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span 
class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br><span class="line">75</span><br><span class="line">76</span><br><span class="line">77</span><br><span class="line">78</span><br></pre></td><td class="code"><pre><span class="line">package blossom.project.core.filter.router;</span><br><span class="line"></span><br><span class="line">import blossom.project.common.enums.ResponseCode;</span><br><span class="line">import blossom.project.common.exception.ConnectException;</span><br><span 
class="line">import blossom.project.common.exception.ResponseException;</span><br><span class="line">import blossom.project.core.ConfigLoader;</span><br><span class="line">import blossom.project.core.context.GatewayContext;</span><br><span class="line">import blossom.project.core.filter.Filter;</span><br><span class="line">import blossom.project.core.filter.FilterAspect;</span><br><span class="line">import blossom.project.core.helper.AsyncHttpHelper;</span><br><span class="line">import blossom.project.core.helper.ResponseHelper;</span><br><span class="line">import blossom.project.core.response.GatewayResponse;</span><br><span class="line">import lombok.<span class="keyword">extern</span>.slf4j.Slf4j;</span><br><span class="line">import org.asynchttpclient.Request;</span><br><span class="line">import org.asynchttpclient.Response;</span><br><span class="line"></span><br><span class="line">import java.util.Objects;</span><br><span class="line">import java.util.concurrent.CompletableFuture;</span><br><span class="line">import java.util.concurrent.TimeoutException;</span><br><span class="line"></span><br><span class="line">import <span class="type">static</span> blossom.project.common.constant.FilterConst.*;</span><br><span class="line"></span><br><span class="line">@Slf4j</span><br><span class="line">@FilterAspect(id=ROUTER_FILTER_ID,</span><br><span class="line">        name = ROUTER_FILTER_NAME,</span><br><span class="line">        order = ROUTER_FILTER_ORDER)</span><br><span class="line">public <span class="class"><span class="keyword">class</span> <span class="title">RouterFilter</span> <span class="title">implements</span> <span class="title">Filter</span> &#123;</span></span><br><span class="line">    @Override</span><br><span class="line">    public <span class="type">void</span> <span class="title function_">doFilter</span><span class="params">(GatewayContext gatewayContext)</span> throws Exception &#123;</span><br><span class="line">        Request request = 
gatewayContext.getRequest().build();</span><br><span class="line">        CompletableFuture <span class="built_in">future</span> = AsyncHttpHelper.getInstance().executeRequest(request);</span><br><span class="line"></span><br><span class="line">        boolean whenComplete = ConfigLoader.getConfig().isWhenComplete();</span><br><span class="line"></span><br><span class="line">        <span class="keyword">if</span> (whenComplete) &#123;</span><br><span class="line">            <span class="built_in">future</span>.whenComplete((response, throwable) -&gt; &#123;</span><br><span class="line">                complete(request, response, throwable, gatewayContext);</span><br><span class="line">            &#125;);</span><br><span class="line">        &#125; <span class="keyword">else</span> &#123;</span><br><span class="line">            <span class="built_in">future</span>.whenCompleteAsync((response, throwable) -&gt; &#123;</span><br><span class="line">                complete(request, response, throwable, gatewayContext);</span><br><span class="line">            &#125;);</span><br><span class="line">        &#125;</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    private <span class="type">void</span> <span class="title function_">complete</span><span class="params">(Request request,</span></span><br><span class="line"><span class="params">                          Response response,</span></span><br><span class="line"><span class="params">                          Throwable throwable,</span></span><br><span class="line"><span class="params">                          GatewayContext gatewayContext)</span> &#123;</span><br><span class="line">        gatewayContext.releaseRequest();</span><br><span class="line"></span><br><span class="line">        try &#123;</span><br><span class="line">            <span class="keyword">if</span> (Objects.nonNull(throwable)) &#123;</span><br><span class="line">                String 
url = request.getUrl();</span><br><span class="line">                <span class="keyword">if</span> (throwable instanceof TimeoutException) &#123;</span><br><span class="line">                    <span class="built_in">log</span>.warn(<span class="string">&quot;complete time out &#123;&#125;&quot;</span>, url);</span><br><span class="line">                    gatewayContext.setThrowable(new ResponseException(ResponseCode.REQUEST_TIMEOUT));</span><br><span class="line">                    gatewayContext.setResponse(GatewayResponse.buildGatewayResponse(ResponseCode.REQUEST_TIMEOUT));</span><br><span class="line">                &#125; <span class="keyword">else</span> &#123;</span><br><span class="line">                    gatewayContext.setThrowable(new ConnectException(throwable,</span><br><span class="line">                            gatewayContext.getUniqueId(),</span><br><span class="line">                            url, ResponseCode.HTTP_RESPONSE_ERROR));</span><br><span class="line">                    gatewayContext.setResponse(GatewayResponse.buildGatewayResponse(ResponseCode.HTTP_RESPONSE_ERROR));</span><br><span class="line">                &#125;</span><br><span class="line">            &#125; <span class="keyword">else</span> &#123;</span><br><span class="line">                gatewayContext.setResponse(GatewayResponse.buildGatewayResponse(response));</span><br><span class="line">            &#125;</span><br><span class="line">        &#125; catch (Throwable t) &#123;</span><br><span class="line">            gatewayContext.setThrowable(new ResponseException(ResponseCode.INTERNAL_ERROR));</span><br><span class="line">            gatewayContext.setResponse(GatewayResponse.buildGatewayResponse(ResponseCode.INTERNAL_ERROR));</span><br><span class="line">            <span class="built_in">log</span>.error(<span class="string">&quot;complete error&quot;</span>, t);</span><br><span class="line">        &#125; finally &#123;</span><br><span class="line">      
      gatewayContext.written();</span><br><span class="line">            ResponseHelper.writeResponse(gatewayContext);</span><br><span class="line">        &#125;</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>The code above is fairly easy to follow. Start with the doFilter method: it builds our request and then executes it asynchronously through our packaged HTTP tooling, using a CompletableFuture to receive the result; once the request finishes, our callback method runs. By default we use the single asynchronous mode, which invokes the complete method when the response arrives. If the response carries an exception, we catch it and report the error; otherwise we fall through to the finally block and write the response data back to the front end. At this point, we can successfully forward our request to the backend service.</p><h1 id="Effectiveness-with-Load-Balancing-Filter"><a href="#Effectiveness-with-Load-Balancing-Filter" class="headerlink" title="Effectiveness with Load Balancing Filter"></a>Effectiveness with Load Balancing Filter</h1><p>In the previous section I implemented both random and round-robin load balancing filters, so let’s demonstrate the effect here.
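As a quick refresher on the two strategies (random and round-robin, the latter also called polling), here is a minimal standalone sketch; the class and method names are mine, not the project's actual filter classes:

```java
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicInteger;

public class LoadBalanceSketch {

    private final List<String> instances;
    private final AtomicInteger position = new AtomicInteger(0);

    public LoadBalanceSketch(List<String> instances) {
        this.instances = instances;
    }

    // Round-robin: walk the instance list in order, wrapping around,
    // so requests spread evenly across all instances.
    public String roundRobin() {
        return instances.get(Math.floorMod(position.getAndIncrement(), instances.size()));
    }

    // Random: pick any instance with equal probability; even only on average.
    public String random() {
        return instances.get(ThreadLocalRandom.current().nextInt(instances.size()));
    }

    public static void main(String[] args) {
        LoadBalanceSketch lb = new LoadBalanceSketch(List.of("10.0.0.1:8080", "10.0.0.2:8080"));
        System.out.println(lb.roundRobin()); // 10.0.0.1:8080
        System.out.println(lb.roundRobin()); // 10.0.0.2:8080
        System.out.println(lb.roundRobin()); // 10.0.0.1:8080 again
    }
}
```

Math.floorMod keeps the index valid even after the AtomicInteger eventually overflows, which a plain % would not.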
First we start two instances of the backend service and make sure they have registered successfully with the registry.<img src="https://s2.loli.net/2023/11/05/KdTEvySs7z5a6pN.webp"></p><p>Next, configure the configuration file for our gateway.<img src="https://s2.loli.net/2023/11/05/Y3n6md1Hf9ZP5Bt.webp"> After that, we set the request headers with the name and version of the backend service to be called.<img src="https://s2.loli.net/2023/11/05/elQwYh5VZIKxH9s.webp"> The request is sent multiple times; since we chose round-robin load balancing, you can see that the requests end up distributed evenly across the two backend services.</p><p><img src="https://s2.loli.net/2023/11/05/nJAepu4vFT9S1QW.webp"> <img src="https://p3-juejin.byteimg.com/tos-cn-i-k3u1fbpfcp/c4f9f7a9d6234f81af239977c8b9df41~tplv-k3u1fbpfcp-jj-mark:3024:0:0:0:q75.awebp#?w=1740&h=210&s=78635&e=png&b=fbfbfb"></p>]]></content>
    
    
    <summary type="html">Self-developed a gateway that helped me successfully land a big factory. This is a complete set of my complete design out of a gateway from 0 to 1, the information contains the thinking process, flow charts, source code and other kinds of information.</summary>
    
    
    
    <category term="java" scheme="https://www.nablepart.com/categories/java/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="source" scheme="https://www.nablepart.com/tags/source/"/>
    
    <category term="design" scheme="https://www.nablepart.com/tags/design/"/>
    
    <category term="Self-developed" scheme="https://www.nablepart.com/tags/Self-developed/"/>
    
    <category term="gateway" scheme="https://www.nablepart.com/tags/gateway/"/>
    
    <category term="big factory" scheme="https://www.nablepart.com/tags/big-factory/"/>
    
    <category term="thinking" scheme="https://www.nablepart.com/tags/thinking/"/>
    
    <category term="process" scheme="https://www.nablepart.com/tags/process/"/>
    
  </entry>
  
  <entry>
    <title>Designing a Gateway from 0 to 1 Filter Chain Implementation - Implementing a Load Balancing Filter</title>
    <link href="https://www.nablepart.com/1c6f2326734d/"/>
    <id>https://www.nablepart.com/1c6f2326734d/</id>
    <published>2023-11-05T08:12:00.000Z</published>
    <updated>2025-08-25T09:00:39.798Z</updated>
    
    <content type="html"><![CDATA[<p><a href="https://www.bilibili.com/video/BV1eC4y1n73c/?vd_source=1d4d63e205b3ad352b4771f87295d16d#reply747752344">Link to effect demo</a></p><h1 id="What-is-a-filter"><a href="#What-is-a-filter" class="headerlink" title="What is a filter?"></a>What is a filter?</h1><p>In the previous sections we’ve registered our gateway service with the registry and successfully pulled configuration from the configuration center. Next we’ll implement the core of a gateway service: the filter chain. A filter chain consists of multiple filters; after a filter completes its own processing, it forwards the request to the next filter in the chain, and together they complete the processing of requests and responses. If you are familiar with SpringCloudGateway, you will know that filters are divided into global and local filters: the former process every request, while for the latter SpringCloud ships default implementations, which we can of course also extend with our own.</p><p>Filters process a request along the chain. If you have studied a gateway project you will know that once every filter has finished its processing, a routing filter sends the request to the corresponding backend service; that is, the request is forwarded to the backend, and when the service finishes processing, the response travels back through the chain. If an exception occurs while the chain is executing, we can catch it within the chain as well. If the request is forwarded and processed normally, we can use the context.writeAndFlush method to write the data back and return. 
The general flow is as follows:</p><p><img src="https://s2.loli.net/2023/11/05/KR7VrLjwict6l8y.webp"> <a href="https://blog.csdn.net/Zhangsama1/article/details/133517494?spm=1001.2014.3001.5502">You can take a brief look at filters in Gateway through this article</a>, and also <a href="https://blog.csdn.net/Zhangsama1/article/details/133522946?spm=1001.2014.3001.5502">SpringCloudGateway: Implementing URL encryption and digital signatures</a></p><p>Now that we understand the basic concept of filters, let’s analyze how to implement them. First, following the SpringCloudGateway (hereafter collectively referred to as scg) approach, define a Filter top-level interface, and have it implement the Ordered interface so each filter can declare its processing priority. We also set up an aspect annotation to enrich the filter, so that we can retrieve information about it, and at the same time make the filters pluggable, so that we can develop them the SPI way. After that, we also need a factory class, FilterFactory, to help us build the chain and execute it. And we also need the filter chain itself, which is GatewayFilterChain in scg. 
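Since the filters are meant to be pluggable the SPI way, each filter implementation must also be registered in a provider-configuration file on the classpath that ServiceLoader can discover. A minimal sketch of that file — the package and class names here are hypothetical placeholders, not the project's real ones:

```text
# src/main/resources/META-INF/services/com.example.gateway.filter.Filter
com.example.gateway.filter.LoadBalanceFilter
com.example.gateway.filter.RouterFilter
```

With such a file on the classpath, ServiceLoader.load(Filter.class) will instantiate each listed class, and the factory can then read its @FilterAspect metadata.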
At the same time, we also need to implement the filter chain factory by setting up a class: GatewayFilterChainFactory</p><p>So we can get, Filter as the top-level interface of the filter, its subclasses need to implement this interface and implement specific filter methods.</p><p>FilterAspect is used to provide filter AOP functionality to facilitate the management of our filters.</p><p>FilterFactory filter factory , used to build the filter chain table and provide according to the filter ID to get the filter method .</p><p>GatewayFilterChain provides specific methods for adding filters and executing filter chain processing logic.</p><p>GatewayFilterChainFactory implements FilterFactory to realize the specific method of constructing filter chain and provide the actual method of getting filters according to their IDs.</p><p>Here posted a specific code implementation: First is the filter chain class , used to store the actual filter , and provide filter execution methods .</p><figure class="highlight c"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br></pre></td><td class="code"><pre><span class="line">public 
<span class="class"><span class="keyword">class</span> <span class="title">GatewayFilterChain</span> &#123;</span></span><br><span class="line"></span><br><span class="line">    private List filters = new ArrayList&lt;&gt;();</span><br><span class="line"></span><br><span class="line">    public GatewayFilterChain <span class="title function_">addFilter</span><span class="params">(Filter filter)</span>&#123;</span><br><span class="line">        filters.add(filter);</span><br><span class="line">        <span class="keyword">return</span> this;</span><br><span class="line">    &#125;</span><br><span class="line">    public GatewayFilterChain <span class="title function_">addFilterList</span><span class="params">(List filter)</span>&#123;</span><br><span class="line">        filters.addAll(filter);</span><br><span class="line">        <span class="keyword">return</span> this;</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    public GatewayContext <span class="title function_">doFilter</span><span class="params">(GatewayContext ctx)</span> throws Exception &#123;</span><br><span class="line">        <span class="keyword">if</span>(filters.isEmpty())&#123;</span><br><span class="line">            <span class="keyword">return</span> ctx;</span><br><span class="line">        &#125;</span><br><span class="line">        try &#123;</span><br><span class="line">            <span class="keyword">for</span>(Filter fl: filters)&#123;</span><br><span class="line">                fl.doFilter(ctx);</span><br><span class="line">            &#125;</span><br><span class="line">        &#125;catch (Exception e)&#123;</span><br><span class="line">            <span class="built_in">log</span>.error(<span class="string">&quot;Exception occurred while executing filter, message: &#123;&#125;&quot;</span>,e.getMessage());</span><br><span class="line">            throw e;</span><br><span class="line">        &#125;</span><br><span class="line">        <span 
class="keyword">return</span> ctx;</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Next, we provide the filter chain factory. The role of the filter chain factory is to store the filter configuration information, create filter chains, and provide methods to get filters. The filter configuration information comes from the configuration center we set up earlier.</p><figure class="highlight c"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span 
class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br></pre></td><td class="code"><pre><span class="line">@Slf4j</span><br><span class="line">public <span class="class"><span class="keyword">class</span> <span class="title">GatewayFilterChainFactory</span> <span class="title">implements</span> <span class="title">FilterFactory</span> &#123;</span></span><br><span class="line"></span><br><span class="line">    private <span class="type">static</span> <span class="class"><span class="keyword">class</span> <span class="title">SingletonInstance</span> &#123;</span></span><br><span class="line">        private <span class="type">static</span> final GatewayFilterChainFactory INSTANCE = new GatewayFilterChainFactory();</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    public <span class="type">static</span> GatewayFilterChainFactory <span class="title function_">getInstance</span><span class="params">()</span> &#123;</span><br><span class="line">        <span class="keyword">return</span> SingletonInstance.INSTANCE;</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span 
class="line">    private Map processorFilterIdMap = new ConcurrentHashMap&lt;&gt;();</span><br><span class="line"></span><br><span class="line">    public <span class="title function_">GatewayFilterChainFactory</span><span class="params">()</span> &#123;</span><br><span class="line">        ServiceLoader serviceLoader = ServiceLoader.load(Filter.class);</span><br><span class="line">        serviceLoader.stream().forEach(filterProvider -&gt; &#123;</span><br><span class="line">            Filter filter = filterProvider.get();</span><br><span class="line">            FilterAspect annotation = filter.getClass().getAnnotation(FilterAspect.class);</span><br><span class="line">            <span class="built_in">log</span>.info(<span class="string">&quot;load filter success:&#123;&#125;,&#123;&#125;,&#123;&#125;,&#123;&#125;&quot;</span>, filter.getClass(),</span><br><span class="line">                    annotation.id(), annotation.name(), annotation.order());</span><br><span class="line">            <span class="keyword">if</span> (annotation != null) &#123;</span><br><span class="line"></span><br><span class="line">                String filterId = annotation.id();</span><br><span class="line">                <span class="keyword">if</span> (StringUtils.isEmpty(filterId)) &#123;</span><br><span class="line">                    filterId = filter.getClass().getName();</span><br><span class="line">                &#125;</span><br><span class="line">                processorFilterIdMap.put(filterId, filter);</span><br><span class="line">            &#125;</span><br><span class="line">        &#125;);</span><br><span class="line"></span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    public <span class="type">static</span> <span class="type">void</span> <span class="title function_">main</span><span class="params">(String[] args)</span> &#123;</span><br><span class="line">        new 
GatewayFilterChainFactory();</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    @Override</span><br><span class="line">    public GatewayFilterChain <span class="title function_">buildFilterChain</span><span class="params">(GatewayContext ctx)</span> throws Exception &#123;</span><br><span class="line">        GatewayFilterChain chain = new GatewayFilterChain();</span><br><span class="line">        List filters = new ArrayList&lt;&gt;();</span><br><span class="line"></span><br><span class="line">        Rule rule = ctx.getRule();</span><br><span class="line">        <span class="keyword">if</span> (rule != null) &#123;</span><br><span class="line"></span><br><span class="line">            Set filterConfigs = rule.getFilterConfigs();</span><br><span class="line">            Iterator iterator = filterConfigs.iterator();</span><br><span class="line">            Rule.FilterConfig filterConfig;</span><br><span class="line">            <span class="keyword">while</span> (iterator.hasNext()) &#123;</span><br><span class="line">                filterConfig = (Rule.FilterConfig) iterator.next();</span><br><span class="line">                <span class="keyword">if</span> (filterConfig == null) &#123;</span><br><span class="line">                    <span class="keyword">continue</span>;</span><br><span class="line">                &#125;</span><br><span class="line">                String filterId = filterConfig.getId();</span><br><span class="line">                <span class="keyword">if</span> (StringUtils.isNotEmpty(filterId) &amp;&amp; getFilterInfo(filterId) != null) &#123;</span><br><span class="line">                    Filter filter = getFilterInfo(filterId);</span><br><span class="line">                    filters.add(filter);</span><br><span class="line">                &#125;</span><br><span class="line">            &#125;</span><br><span class="line">        &#125;</span><br><span class="line"></span><br><span 
class="line">        filters.add(new RouterFilter());</span><br><span class="line"></span><br><span class="line">        filters.sort(Comparator.comparingInt(Filter::getOrder));</span><br><span class="line"></span><br><span class="line">        chain.addFilterList(filters);</span><br><span class="line">        <span class="keyword">return</span> chain;</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    @Override</span><br><span class="line">    public Filter <span class="title function_">getFilterInfo</span><span class="params">(String filterId)</span> throws Exception &#123;</span><br><span class="line">        <span class="keyword">return</span> processorFilterIdMap.get(filterId);</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="Writing-load-balancing-filters"><a href="#Writing-load-balancing-filters" class="headerlink" title="Writing load balancing filters"></a>Writing load balancing filters</h2><h2 id="Definition-and-Implementation-of-Load-Balancing"><a href="#Definition-and-Implementation-of-Load-Balancing" class="headerlink" title="Definition and Implementation of Load Balancing"></a>Definition and Implementation of Load Balancing</h2><p>Before writing a load balancing filter, you need to understand what load balancing is.</p><blockquote><p>Load Balancing is a technique in computer networking and server architecture designed to distribute network requests, data streams, or loads to multiple servers or computing resources to ensure high availability, improve performance, and avoid overloading any single server or resource. 
Load balancing plays an important role in distributed systems and network applications by helping to cope with traffic fluctuations and providing redundancy to improve system reliability and performance.</p></blockquote><p>Load balancing can be implemented in the following ways:</p><ul><li>DNS load balancing</li><li>Hardware load balancing</li><li>Software load balancing</li></ul><p>DNS Load Balancing (Geographic Level): Principle: DNS load balancing uses DNS servers to map domain name resolution requests to multiple different IP addresses, each of which corresponds to a load balancer or server. The DNS servers return the resolved IP addresses to the client, which then sends the request to one of them.</p><p>Pros: Relatively simple, no additional hardware or software load balancers required, easy to implement and scale.</p><p>Cons: DNS load balancing cannot intelligently distribute traffic, dynamically adjust load, or handle server failure detection and recovery. Client-side caching of DNS records may result in uneven traffic distribution. DNS load balancing also cannot detect whether a backend service is alive, so requests may be sent to servers that are down.</p><p>Hardware Load Balancing: Principle: Hardware load balancing distributes traffic to backend servers by means of specialized hardware devices. These devices usually have a performance advantage and can handle a large number of connections and requests. Search for F5 and A10 load balancers if you are interested.</p><p>Benefits: High performance, purpose-built for load balancing, usually with high availability and reliability. Supports advanced load balancing algorithms and traffic management.</p><p>Cons: Relatively expensive, requiring the purchase of specialized hardware devices. 
Configuration and management can be complex and require specialized knowledge.</p><p>Software Load Balancing: Principle: Software load balancing distributes traffic by running load balancing software on commodity servers. The software can be open source or commercial, such as Nginx, HAProxy, LVS, etc.</p><p>Benefits: Relatively economical, runs on common hardware, easy to deploy and manage. Offers a variety of load balancing algorithms and advanced configuration options.</p><p>Cons: Performance may be limited by the server hardware, and extremely high traffic loads may require more servers. Availability and reliability may not match specialized hardware appliances.</p><p>It is recommended to learn more about the differences between Nginx and LVS load balancing.</p><p>In real production these approaches are not used individually but in combination: 1. DNS load balancing for geographic-level load balancing; 2. Hardware load balancing for cluster-level load balancing; 3. Software load balancing for machine-level load balancing.</p><h2 id="Load-balancing-algorithms"><a href="#Load-balancing-algorithms" class="headerlink" title="Load balancing algorithms"></a>Load balancing algorithms</h2><p>Static load balancing algorithms: round-robin (polling), ratio, and priority. The most commonly used is round-robin, whose characteristics are as follows: 1, it connects to the servers in the queue in cyclic order, and once a server misbehaves it is removed from the queue. 2, Advantages: simple to implement, efficient, easy to scale horizontally. 3, Disadvantages: the destination node of a request is unpredictable, which makes it unsuitable for write-oriented storage scenarios. 
</p><p>Dynamic load balancing algorithms: least connections, fastest response time, dynamic performance allocation, dynamic server supplementation, quality of service, and so on. The most commonly used dynamic algorithm is dynamic performance allocation, which adjusts the traffic distribution based on performance metrics of the applications and application servers collected by BIG-IP; in practice we would typically work with Prometheus to realize this.</p><h2 id="Design-Implementation"><a href="#Design-Implementation" class="headerlink" title="Design Implementation"></a>Design Implementation</h2><p>The first step is to create our top-level interface, which helps us obtain the backend service instance selected according to the load balancing policy.</p><figure class="highlight c"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line">public interface LoadBalanceGatewayRule &#123;</span><br><span class="line"></span><br><span class="line">    ServiceInstance <span class="title function_">choose</span><span class="params">(GatewayContext ctx)</span>;</span><br><span class="line"></span><br><span class="line">    ServiceInstance <span class="title function_">choose</span><span class="params">(String serviceId)</span>;</span><br><span class="line"></span><br><span class="line">&#125;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>After we implement this interface, we first implement a relatively simple random load balancing strategy. 
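One detail worth noting before the code: the per-service getInstance caching below uses a get-then-put on a ConcurrentHashMap, which can briefly create duplicate rule objects under concurrent access. ConcurrentHashMap.computeIfAbsent yields the same cache atomically — a minimal standalone sketch, where Rule is a hypothetical stand-in for a load-balance rule class:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class RuleCache {
    // Hypothetical stand-in for a load-balance rule bound to one serviceId.
    static class Rule {
        final String serviceId;
        Rule(String serviceId) { this.serviceId = serviceId; }
    }

    private static final Map<String, Rule> SERVICE_MAP = new ConcurrentHashMap<>();

    // Atomically creates the rule on first access; all later calls reuse it,
    // even when several threads race on the same serviceId.
    public static Rule getInstance(String serviceId) {
        return SERVICE_MAP.computeIfAbsent(serviceId, Rule::new);
    }

    public static void main(String[] args) {
        Rule a = getInstance("backend-http-server:1.0.0");
        Rule b = getInstance("backend-http-server:1.0.0");
        System.out.println(a == b); // prints "true": same cached instance
    }
}
```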
The implementation is based on our service id, and then save all the service instances corresponding to the current service id, after which we can randomly return one from the service instances.</p><figure class="highlight c"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br></pre></td><td class="code"><pre><span class="line">@Slf4j</span><br><span class="line">public <span class="class"><span class="keyword">class</span> <span class="title">RandomLoadBalanceRule</span> <span class="title">implements</span> <span class="title">LoadBalanceGatewayRule</span> &#123;</span></span><br><span class="line"></span><br><span class="line">    private 
final String serviceId;</span><br><span class="line"></span><br><span class="line">    private Set serviceInstanceSet;</span><br><span class="line"></span><br><span class="line">    public <span class="title function_">RandomLoadBalanceRule</span><span class="params">(String serviceId)</span> &#123;</span><br><span class="line">        this.serviceId = serviceId;</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    private <span class="type">static</span> ConcurrentHashMap serviceMap = new ConcurrentHashMap&lt;&gt;();</span><br><span class="line"></span><br><span class="line">    public <span class="type">static</span> RandomLoadBalanceRule <span class="title function_">getInstance</span><span class="params">(String serviceId)</span> &#123;</span><br><span class="line">        RandomLoadBalanceRule loadBalanceRule = serviceMap.get(serviceId);</span><br><span class="line">        <span class="keyword">if</span> (loadBalanceRule == null) &#123;</span><br><span class="line">            loadBalanceRule = new RandomLoadBalanceRule(serviceId);</span><br><span class="line">            serviceMap.put(serviceId, loadBalanceRule);</span><br><span class="line">        &#125;</span><br><span class="line">        <span class="keyword">return</span> loadBalanceRule;</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    @Override</span><br><span class="line">    public ServiceInstance <span class="title function_">choose</span><span class="params">(GatewayContext ctx)</span> &#123;</span><br><span class="line">        String serviceId = ctx.getUniqueId();</span><br><span class="line">        <span class="keyword">return</span> choose(serviceId);</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    @Override</span><br><span class="line">    public ServiceInstance <span class="title function_">choose</span><span class="params">(String 
serviceId)</span> &#123;</span><br><span class="line">        Set serviceInstanceSet =</span><br><span class="line">                DynamicConfigManager.getInstance().getServiceInstanceByUniqueId(serviceId);</span><br><span class="line">        <span class="keyword">if</span> (serviceInstanceSet.isEmpty()) &#123;</span><br><span class="line">            <span class="built_in">log</span>.warn(<span class="string">&quot;No instance available for:&#123;&#125;&quot;</span>, serviceId);</span><br><span class="line">            throw new NotFoundException(SERVICE_INSTANCE_NOT_FOUND);</span><br><span class="line">        &#125;</span><br><span class="line">        List instances = new ArrayList(serviceInstanceSet);</span><br><span class="line">        <span class="type">int</span> index = ThreadLocalRandom.current().nextInt(instances.size());</span><br><span class="line">        ServiceInstance instance = (ServiceInstance) instances.get(index);</span><br><span class="line">        <span class="keyword">return</span> instance;</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>For the polling load balancing strategy, we would need to maintain a global index number and then keep incrementing it each time we execute, and then take the remainder of the number of service instances to know which backend instance to execute.</p><figure class="highlight c"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span 
class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br></pre></td><td class="code"><pre><span class="line">@Slf4j</span><br><span class="line">public <span class="class"><span class="keyword">class</span> <span class="title">RoundRobinLoadBalanceRule</span> <span class="title">implements</span> <span class="title">LoadBalanceGatewayRule</span> &#123;</span></span><br><span class="line"></span><br><span class="line">    private AtomicInteger position = new AtomicInteger(<span class="number">1</span>);</span><br><span class="line"></span><br><span class="line">    private final String serviceId;</span><br><span class="line"></span><br><span class="line">    public <span class="title function_">RoundRobinLoadBalanceRule</span><span class="params">(String serviceId)</span> &#123;</span><br><span class="line">        this.serviceId = serviceId;</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    private <span class="type">static</span> ConcurrentHashMap serviceMap = new 
ConcurrentHashMap&lt;&gt;();</span><br><span class="line"></span><br><span class="line">    public <span class="type">static</span> RoundRobinLoadBalanceRule <span class="title function_">getInstance</span><span class="params">(String serviceId)</span> &#123;</span><br><span class="line">        RoundRobinLoadBalanceRule loadBalanceRule = serviceMap.get(serviceId);</span><br><span class="line">        <span class="keyword">if</span> (loadBalanceRule == null) &#123;</span><br><span class="line">            loadBalanceRule = new RoundRobinLoadBalanceRule(serviceId);</span><br><span class="line">            serviceMap.put(serviceId, loadBalanceRule);</span><br><span class="line">        &#125;</span><br><span class="line">        <span class="keyword">return</span> loadBalanceRule;</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    @Override</span><br><span class="line">    public ServiceInstance <span class="title function_">choose</span><span class="params">(GatewayContext ctx)</span> &#123;</span><br><span class="line">        <span class="keyword">return</span> choose(ctx.getUniqueId());</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    @Override</span><br><span class="line">    public ServiceInstance <span class="title function_">choose</span><span class="params">(String serviceId)</span> &#123;</span><br><span class="line">        Set serviceInstanceSet =</span><br><span class="line">                DynamicConfigManager.getInstance().getServiceInstanceByUniqueId(serviceId);</span><br><span class="line">        <span class="keyword">if</span> (serviceInstanceSet.isEmpty()) &#123;</span><br><span class="line">            <span class="built_in">log</span>.warn(<span class="string">&quot;No instance available for:&#123;&#125;&quot;</span>, serviceId);</span><br><span class="line">            throw new NotFoundException(SERVICE_INSTANCE_NOT_FOUND);</span><br><span 
class="line">        &#125;</span><br><span class="line">        List instances = new ArrayList(serviceInstanceSet);</span><br><span class="line">        <span class="keyword">if</span> (instances.isEmpty()) &#123;</span><br><span class="line">            <span class="built_in">log</span>.warn(<span class="string">&quot;No instance available for service:&#123;&#125;&quot;</span>, serviceId);</span><br><span class="line">            <span class="keyword">return</span> null;</span><br><span class="line">        &#125; <span class="keyword">else</span> &#123;</span><br><span class="line">            <span class="type">int</span> pos = Math.<span class="built_in">abs</span>(this.position.incrementAndGet());</span><br><span class="line">            <span class="keyword">return</span> instances.get(pos % instances.size());</span><br><span class="line">        &#125;</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>Finally, we will be able to select the load balancing policy for our implementation based on the load balancing policy set in the request header to be used.</p><figure class="highlight c"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span 
class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br></pre></td><td class="code"><pre><span class="line">@Slf4j</span><br><span class="line">@FilterAspect(id=LOAD_BALANCE_FILTER_ID,</span><br><span class="line">        name = LOAD_BALANCE_FILTER_NAME,</span><br><span class="line">        order = LOAD_BALANCE_FILTER_ORDER)</span><br><span class="line">public <span class="class"><span class="keyword">class</span> <span class="title">LoadBalanceFilter</span> <span class="title">implements</span> <span class="title">Filter</span> &#123;</span></span><br><span class="line"></span><br><span class="line">    @Override</span><br><span class="line">    public <span class="type">void</span> <span class="title function_">doFilter</span><span class="params">(GatewayContext 
ctx)</span>&#123;</span><br><span class="line"></span><br><span class="line">        String serviceId = ctx.getUniqueId();</span><br><span class="line"></span><br><span class="line">        LoadBalanceGatewayRule gatewayLoadBalanceRule = getLoadBalanceRule(ctx);</span><br><span class="line">        ServiceInstance serviceInstance = gatewayLoadBalanceRule.choose(serviceId);</span><br><span class="line">        GatewayRequest request = ctx.getRequest();</span><br><span class="line">        <span class="keyword">if</span>(serviceInstance != null &amp;&amp; request != null)&#123;</span><br><span class="line">            <span class="built_in">log</span>.info(<span class="string">&quot;IP: &#123;&#125;, port: &#123;&#125;&quot;</span>, serviceInstance.getIp(), serviceInstance.getPort());</span><br><span class="line">            String host  = serviceInstance.getIp()+<span class="string">&quot;:&quot;</span>+serviceInstance.getPort();</span><br><span class="line">            request.setModifyHost(host);</span><br><span class="line">        &#125;<span class="keyword">else</span>&#123;</span><br><span class="line">            <span class="built_in">log</span>.warn(<span class="string">&quot;No instance available for: &#123;&#125;&quot;</span>,serviceId);</span><br><span class="line">            throw new NotFoundException(SERVICE_INSTANCE_NOT_FOUND);</span><br><span class="line">        &#125;</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    public LoadBalanceGatewayRule <span class="title function_">getLoadBalanceRule</span><span class="params">(GatewayContext ctx)</span> &#123;</span><br><span class="line">        LoadBalanceGatewayRule loadBalanceRule = null;</span><br><span class="line">        Rule configRule = ctx.getRule();</span><br><span class="line">        <span class="keyword">if</span> (configRule != null) &#123;</span><br><span class="line">            Set filterConfigs = configRule.getFilterConfigs();</span><br><span
class="line">            Iterator iterator = filterConfigs.iterator();</span><br><span class="line">            Rule.FilterConfig filterConfig;</span><br><span class="line">            <span class="keyword">while</span> (iterator.hasNext()) &#123;</span><br><span class="line">                filterConfig = (Rule.FilterConfig) iterator.next();</span><br><span class="line">                <span class="keyword">if</span> (filterConfig == null) &#123;</span><br><span class="line">                    <span class="keyword">continue</span>;</span><br><span class="line">                &#125;</span><br><span class="line">                String filterId = filterConfig.getId();</span><br><span class="line">                <span class="keyword">if</span> (filterId.equals(LOAD_BALANCE_FILTER_ID)) &#123;</span><br><span class="line">                    String config = filterConfig.getConfig();</span><br><span class="line">                    String strategy = LOAD_BALANCE_STRATEGY_RANDOM;</span><br><span class="line">                    <span class="keyword">if</span> (StringUtils.isNotEmpty(config)) &#123;</span><br><span class="line">                        Map mapTypeMap = JSON.parseObject(config, Map.class);</span><br><span class="line">                        strategy = mapTypeMap.getOrDefault(LOAD_BALANCE_KEY, strategy);</span><br><span class="line">                    &#125;</span><br><span class="line">                    <span class="keyword">switch</span> (strategy) &#123;</span><br><span class="line">                        <span class="keyword">case</span> LOAD_BALANCE_STRATEGY_RANDOM:</span><br><span class="line">                            loadBalanceRule = RandomLoadBalanceRule.getInstance(configRule.getServiceId());</span><br><span class="line">                            <span class="keyword">break</span>;</span><br><span class="line">                        <span class="keyword">case</span> LOAD_BALANCE_STRATEGY_ROUND_ROBIN:</span><br><span class="line">       
                     loadBalanceRule = RoundRobinLoadBalanceRule.getInstance(configRule.getServiceId());</span><br><span class="line">                            <span class="keyword">break</span>;</span><br><span class="line">                        <span class="keyword">default</span>:</span><br><span class="line">                            <span class="built_in">log</span>.warn(<span class="string">&quot;Unknown loadBalance strategy: &#123;&#125;&quot;</span>, strategy);</span><br><span class="line">                            loadBalanceRule = RandomLoadBalanceRule.getInstance(configRule.getServiceId());</span><br><span class="line">                            <span class="keyword">break</span>;</span><br><span class="line">                    &#125;</span><br><span class="line">                &#125;</span><br><span class="line">            &#125;</span><br><span class="line">        &#125;</span><br><span class="line">        <span class="keyword">return</span> loadBalanceRule;</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>At this point, we have successfully implemented the load balancing filter.</p>]]></content>
    
    
    <summary type="html">I built a gateway from scratch, and it helped me land an offer from a major tech company. This is my complete design of a gateway from 0 to 1; the material includes the thought process, flow charts, source code, and more.</summary>
    
    
    
    <category term="java" scheme="https://www.nablepart.com/categories/java/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="source" scheme="https://www.nablepart.com/tags/source/"/>
    
    <category term="design" scheme="https://www.nablepart.com/tags/design/"/>
    
    <category term="Self-developed" scheme="https://www.nablepart.com/tags/Self-developed/"/>
    
    <category term="gateway" scheme="https://www.nablepart.com/tags/gateway/"/>
    
    <category term="big factory" scheme="https://www.nablepart.com/tags/big-factory/"/>
    
    <category term="thinking" scheme="https://www.nablepart.com/tags/thinking/"/>
    
    <category term="process" scheme="https://www.nablepart.com/tags/process/"/>
    
  </entry>
  
  <entry>
    <title>Designing a gateway from 0 to 1: the design of the Netty network communication framework</title>
    <link href="https://www.nablepart.com/461011c2085f/"/>
    <id>https://www.nablepart.com/461011c2085f/</id>
    <published>2023-11-05T07:12:00.000Z</published>
    <updated>2025-08-25T09:00:39.798Z</updated>
    
    <content type="html"><![CDATA[<p>After completing the current chapter, the result looks like this:<img src="https://s2.loli.net/2023/11/05/gdeDE2mivfHUwsO.webp"> You can find the project source code and related materials in the video introduction: <a href="https://www.bilibili.com/video/BV1eC4y1n73c/?vd_source=1d4d63e205b3ad352b4771f87295d16d#reply747752344">link to the effect demo</a></p><p>This request is redirected to localhost:8080&#x2F;http-demo&#x2F;ping, the address of our backend service.</p><h1 id="Netty-architecture"><a href="#Netty-architecture" class="headerlink" title="Netty architecture"></a>Netty architecture</h1><p>As the <a href="https://netty.io/index.html">Netty website</a> puts it, Netty helps us build scalable, high-performance, and maintainable network applications. Here are some of its core concepts:</p><p>Channel: A channel is an abstraction of data communication that can represent an underlying network connection such as a socket. Netty provides several types of channels for different transports (e.g., NIO, OIO, local transport).</p><p>EventLoop: EventLoop is a loop for handling events and is a core component in Netty. Each Channel is associated with an EventLoop, which is responsible for handling all of the events of that Channel, such as receiving, processing, and sending data.</p><p>ChannelHandler: ChannelHandler is a component that handles channel events and can be used to implement protocol encoding and decoding, business logic processing, and so on. A ChannelHandler chain is a sequence of operations performed on a channel.</p><p>Bootstrap: Bootstrap is a tool used to start and configure a network application, usually to create and connect a Channel. It helps configure the EventLoopGroup, Channel type, ChannelHandler, and so on.</p><p>ChannelPipeline: ChannelPipeline is a data structure used to maintain and process the ChannelHandler chain.
It is used to specify the order in which events are processed and the direction in which they flow, in order to perform specific operations on the Channel.</p><p>ByteBuffer: ByteBuffer is the basic byte container in Java NIO; Netty builds on the same idea with its own ByteBuf (see below), which provides richer read and write operations and supports zero-copy.</p><p>Codecs: Codecs are components used to convert raw data into protocol-specific messages and messages into bytes of data; Netty provides a number of built-in codecs and also supports custom codecs.</p><p>Promise: Promise is an abstraction for handling the results of asynchronous operations. It can be used to monitor and obtain the result or state of an asynchronous operation.</p><p>Future: A Future is also an abstraction for asynchronous operations and represents an operation that has not yet completed; asynchronous operations in Netty typically return a Future, from which the result can be retrieved once the operation completes.</p><p>ByteBuf: ByteBuf is the byte data container in Netty; it provides efficient read and write operations and supports reference counting to improve performance and reduce memory overhead. We will develop on top of the Netty Reactor threading architecture; below is the architecture diagram (from the web):<img src="https://s2.loli.net/2023/11/05/v9pUoj21SgOkl6N.webp"></p><h1 id="Implementing-NettyHttpServer"><a href="#Implementing-NettyHttpServer" class="headerlink" title="Implementing NettyHttpServer"></a>Implementing NettyHttpServer</h1><p>With this brief overview of the Netty architecture, I will now implement the server based on Netty.
Roughly, the steps are as follows: 1: encapsulate the properties 2: implement the constructor and the init method 3: add the epoll optimization 4: implement the start method 5: implement the shutdown method</p><p>The finished code for this part is as follows:</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span
class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br><span class="line">75</span><br><span class="line">76</span><br><span class="line">77</span><br><span class="line">78</span><br><span class="line">79</span><br><span class="line">80</span><br><span class="line">81</span><br><span class="line">82</span><br><span class="line">83</span><br><span class="line">84</span><br><span class="line">85</span><br><span class="line">86</span><br><span class="line">87</span><br><span class="line">88</span><br><span class="line">89</span><br><span class="line">90</span><br><span class="line">91</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">package</span> blossom.gateway.core.netty;</span><br><span class="line"></span><br><span class="line"><span class="keyword">import</span> blossom.gateway.common.utils.SystemUtil;</span><br><span class="line"><span class="keyword">import</span> blossom.gateway.core.LifeCycle;</span><br><span class="line"><span class="keyword">import</span> blossom.gateway.core.config.GatewayConfig;</span><br><span class="line"><span class="keyword">import</span> blossom.gateway.core.rule.Rule;</span><br><span class="line"><span class="keyword">import</span> io.netty.bootstrap.ServerBootstrap;</span><br><span class="line"><span 
class="keyword">import</span> io.netty.channel.Channel;</span><br><span class="line"><span class="keyword">import</span> io.netty.channel.ChannelInitializer;</span><br><span class="line"><span class="keyword">import</span> io.netty.channel.EventLoopGroup;</span><br><span class="line"><span class="keyword">import</span> io.netty.channel.epoll.Epoll;</span><br><span class="line"><span class="keyword">import</span> io.netty.channel.epoll.EpollEventLoopGroup;</span><br><span class="line"><span class="keyword">import</span> io.netty.channel.epoll.EpollServerDomainSocketChannel;</span><br><span class="line"><span class="keyword">import</span> io.netty.channel.epoll.EpollServerSocketChannel;</span><br><span class="line"><span class="keyword">import</span> io.netty.channel.nio.NioEventLoopGroup;</span><br><span class="line"><span class="keyword">import</span> io.netty.channel.socket.nio.NioServerSocketChannel;</span><br><span class="line"><span class="keyword">import</span> io.netty.handler.codec.http.HttpObjectAggregator;</span><br><span class="line"><span class="keyword">import</span> io.netty.handler.codec.http.HttpServerCodec;</span><br><span class="line"><span class="keyword">import</span> io.netty.util.concurrent.DefaultThreadFactory;</span><br><span class="line"><span class="keyword">import</span> lombok.extern.slf4j.Slf4j;</span><br><span class="line"></span><br><span class="line">@Slf4j <span class="keyword">public</span> <span class="keyword">class</span> <span class="title class_">NettyHttpServer</span> <span class="keyword">implements</span> <span class="title class_">LifeCycle</span> &#123;</span><br><span class="line">    <span class="keyword">private</span> <span class="keyword">final</span> GatewayConfig config;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">private</span> ServerBootstrap serverBootstrap;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">private</span> EventLoopGroup 
eventLoopGroupBoss;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">private</span> EventLoopGroup eventLoopGroupWorker;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">public</span> <span class="title function_">NettyHttpServer</span><span class="params">(GatewayConfig config)</span> &#123;</span><br><span class="line">        <span class="built_in">this</span>.config = config;</span><br><span class="line">        init();</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">public</span> <span class="keyword">void</span> <span class="title function_">init</span><span class="params">()</span> &#123;</span><br><span class="line"></span><br><span class="line">        <span class="keyword">if</span> (canUserEpoll()) &#123;</span><br><span class="line">            <span class="built_in">this</span>.serverBootstrap = <span class="keyword">new</span> <span class="title class_">ServerBootstrap</span>();</span><br><span class="line">            <span class="built_in">this</span>.eventLoopGroupBoss = <span class="keyword">new</span> <span class="title class_">EpollEventLoopGroup</span>(config.getEventLoopGroupBossNum(),</span><br><span class="line">                    <span class="keyword">new</span> <span class="title class_">DefaultThreadFactory</span>(<span class="string">&quot;netty-boss-nio&quot;</span>));</span><br><span class="line">            <span class="built_in">this</span>.eventLoopGroupWorker = <span class="keyword">new</span> <span class="title class_">EpollEventLoopGroup</span>(config.getEventLoopGroupWorkerNum(),</span><br><span class="line">                    <span class="keyword">new</span> <span class="title class_">DefaultThreadFactory</span>(<span class="string">&quot;netty-worker-nio&quot;</span>));</span><br><span class="line">        &#125; <span class="keyword">else</span> &#123;</span><br><span 
class="line">            <span class="built_in">this</span>.serverBootstrap = <span class="keyword">new</span> <span class="title class_">ServerBootstrap</span>();</span><br><span class="line">            <span class="built_in">this</span>.eventLoopGroupBoss = <span class="keyword">new</span> <span class="title class_">NioEventLoopGroup</span>(config.getEventLoopGroupBossNum(),</span><br><span class="line">                    <span class="keyword">new</span> <span class="title class_">DefaultThreadFactory</span>(<span class="string">&quot;netty-boss-nio&quot;</span>));</span><br><span class="line">            <span class="built_in">this</span>.eventLoopGroupWorker = <span class="keyword">new</span> <span class="title class_">NioEventLoopGroup</span>(config.getEventLoopGroupWorkerNum(),</span><br><span class="line">                    <span class="keyword">new</span> <span class="title class_">DefaultThreadFactory</span>(<span class="string">&quot;netty-worker-nio&quot;</span>));</span><br><span class="line"></span><br><span class="line">        &#125;</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">public</span> <span class="type">boolean</span> <span class="title function_">canUserEpoll</span><span class="params">()</span> &#123;</span><br><span class="line">        <span class="keyword">return</span> SystemUtil.isLinux() &amp;&amp; Epoll.isAvailable();</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">public</span> <span class="keyword">void</span> <span class="title function_">start</span><span class="params">()</span> &#123;</span><br><span class="line">        <span class="built_in">this</span>.serverBootstrap.group(eventLoopGroupBoss,</span><br><span class="line">                eventLoopGroupWorker)</span><br><span class="line">                .channel(canUserEpoll() ?</span><br><span class="line">            
    EpollServerSocketChannel.class : NioServerSocketChannel.class)</span><br><span class="line">                .childHandler(<span class="keyword">new</span> <span class="title class_">ChannelInitializer</span>() &#123;</span><br><span class="line"></span><br><span class="line">            <span class="keyword">protected</span> <span class="keyword">void</span> <span class="title function_">initChannel</span><span class="params">(Channel channel)</span> <span class="keyword">throws</span> Exception &#123;</span><br><span class="line">                channel.pipeline().addLast(</span><br><span class="line"></span><br><span class="line">                        <span class="keyword">new</span> <span class="title class_">HttpServerCodec</span>(),</span><br><span class="line">                        <span class="keyword">new</span> <span class="title class_">HttpObjectAggregator</span>(config.getMaxContentLength()),</span><br><span class="line">                        <span class="keyword">new</span> <span class="title class_">NettyServerConnectionManager</span>(),</span><br><span class="line">                        <span class="keyword">new</span> <span class="title class_">NettyHttpServerHandler</span>());</span><br><span class="line">            &#125;</span><br><span class="line">        &#125;);</span><br><span class="line">        <span class="keyword">try</span> &#123;</span><br><span class="line">            <span class="built_in">this</span>.serverBootstrap.bind(config.getPort()).sync();</span><br><span class="line">            log.info(<span class="string">&quot;server startup on port &#123;&#125;&quot;</span>,config.getPort());</span><br><span class="line">        &#125; <span class="keyword">catch</span> (InterruptedException e) &#123;</span><br><span class="line">            <span class="keyword">throw</span> <span class="keyword">new</span> <span class="title class_">RuntimeException</span>(e);</span><br><span class="line">        &#125;</span><br><span class="line">    
&#125;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">public</span> <span class="keyword">void</span> <span class="title function_">shutdown</span><span class="params">()</span> &#123;</span><br><span class="line">        <span class="keyword">if</span> (eventLoopGroupBoss!=<span class="literal">null</span>)&#123;</span><br><span class="line">            eventLoopGroupBoss.shutdownGracefully();</span><br><span class="line">        &#125;</span><br><span class="line">        <span class="keyword">if</span> (eventLoopGroupWorker!=<span class="literal">null</span>)&#123;</span><br><span class="line">            eventLoopGroupWorker.shutdownGracefully();</span><br><span class="line">        &#125;</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h1 id="Implementing-NettyHttpServerHandler"><a href="#Implementing-NettyHttpServerHandler" class="headerlink" title="Implementing NettyHttpServerHandler"></a>Implementing NettyHttpServerHandler</h1><p>Our NettyHttpServerHandler extends ChannelInboundHandlerAdapter so that we can implement custom inbound data handling logic. ChannelInboundHandlerAdapter is an adapter class provided by Netty that implements the ChannelInboundHandler interface and supplies default pass-through implementations for all of its callbacks, so a subclass only needs to override the events it cares about. 
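</p><p>As a plain-Java sketch of this adapter idea (the names below are hypothetical and are not Netty APIs): the adapter supplies empty default implementations for every callback, so a concrete handler overrides only the events it cares about.</p><figure class="highlight java"><table><tr><td class="code"><pre><span class="line">interface InboundHandler &#123;</span><br><span class="line">    void onActive(StringBuilder events);</span><br><span class="line">    void onRead(StringBuilder events, String msg);</span><br><span class="line">    void onInactive(StringBuilder events);</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line">// The adapter supplies a do-nothing default for every callback.</span><br><span class="line">class InboundHandlerAdapter implements InboundHandler &#123;</span><br><span class="line">    public void onActive(StringBuilder events) &#123; &#125;</span><br><span class="line">    public void onRead(StringBuilder events, String msg) &#123; &#125;</span><br><span class="line">    public void onInactive(StringBuilder events) &#123; &#125;</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line">// A concrete handler overrides only the event it cares about,</span><br><span class="line">// just as our NettyHttpServerHandler will override only channelRead.</span><br><span class="line">class ReadOnlyHandler extends InboundHandlerAdapter &#123;</span><br><span class="line">    @Override</span><br><span class="line">    public void onRead(StringBuilder events, String msg) &#123;</span><br><span class="line">        events.append(&quot;read:&quot;).append(msg);</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Invoking onActive, onRead, and onInactive on a ReadOnlyHandler records only the read event; the other two callbacks fall through to the empty defaults inherited from the adapter.</p><p>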
In this module, the steps are roughly as follows: 1: extend ChannelInboundHandlerAdapter 2: implement channelRead 3: delegate the processing logic to NettyProcessor</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">package</span> blossom.gateway.core.netty;</span><br><span class="line"></span><br><span class="line"><span class="keyword">import</span> blossom.gateway.core.wrapper.HttpRequestWrapper;</span><br><span class="line"><span class="keyword">import</span> io.netty.channel.ChannelHandlerContext;</span><br><span class="line"><span class="keyword">import</span> io.netty.channel.ChannelInboundHandlerAdapter;</span><br><span class="line"><span class="keyword">import</span> io.netty.handler.codec.http.FullHttpRequest;</span><br><span class="line"></span><br><span class="line"><span class="keyword">public</span> <span class="keyword">class</span> <span class="title class_">NettyHttpServerHandler</span> <span class="keyword">extends</span> <span class="title class_">ChannelInboundHandlerAdapter</span> &#123;</span><br><span class="line"></span><br><span
class="line">    <span class="keyword">private</span> <span class="keyword">final</span> NettyProcessor processor;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">public</span> <span class="title function_">NettyHttpServerHandler</span><span class="params">(NettyProcessor processor)</span> &#123;</span><br><span class="line">        <span class="built_in">this</span>.processor = processor;</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">public</span> <span class="keyword">void</span> <span class="title function_">channelRead</span><span class="params">(ChannelHandlerContext ctx,Object msg)</span> <span class="keyword">throws</span> Exception&#123;</span><br><span class="line">        <span class="type">FullHttpRequest</span> <span class="variable">request</span> <span class="operator">=</span> (FullHttpRequest)msg;</span><br><span class="line">        <span class="type">HttpRequestWrapper</span> <span class="variable">wrapper</span> <span class="operator">=</span> <span class="keyword">new</span> <span class="title class_">HttpRequestWrapper</span>();</span><br><span class="line">        wrapper.setFullHttpRequest(request);</span><br><span class="line">        wrapper.setContext(ctx);</span><br><span class="line"></span><br><span class="line">        processor.process(wrapper);</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h1 id="Implement-NettyProcessor"><a href="#Implement-NettyProcessor" class="headerlink" title="Implement NettyProcessor"></a>Implement NettyProcessor</h1><p>This is the core interface; we need to implement the following: 1: define the interface 2: implement a minimum usable version 3: implement the routing function 4: obtain the asynchronous configuration and implement the complete method 5: 
exception handling 6: write back the response information and release resources</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span 
class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br><span class="line">75</span><br><span class="line">76</span><br><span class="line">77</span><br><span class="line">78</span><br><span class="line">79</span><br><span class="line">80</span><br><span class="line">81</span><br><span class="line">82</span><br><span class="line">83</span><br><span class="line">84</span><br><span class="line">85</span><br><span class="line">86</span><br><span class="line">87</span><br><span class="line">88</span><br><span class="line">89</span><br><span class="line">90</span><br><span class="line">91</span><br><span class="line">92</span><br><span class="line">93</span><br><span class="line">94</span><br><span class="line">95</span><br><span class="line">96</span><br><span class="line">97</span><br><span class="line">98</span><br><span class="line">99</span><br><span class="line">100</span><br><span class="line">101</span><br><span class="line">102</span><br><span class="line">103</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">package</span> blossom.project.core.netty.processor;</span><br><span class="line"></span><br><span class="line"><span class="keyword">import</span> blossom.gateway.common.enums.ResponseCode;</span><br><span class="line"><span class="keyword">import</span> blossom.gateway.common.exception.BaseException;</span><br><span class="line"><span class="keyword">import</span> blossom.gateway.common.exception.ConnectException;</span><br><span class="line"><span 
class="keyword">import</span> blossom.gateway.common.exception.ResponseException;</span><br><span class="line"><span class="keyword">import</span> blossom.project.core.ConfigLoader;</span><br><span class="line"><span class="keyword">import</span> blossom.project.core.context.GatewayContext;</span><br><span class="line"><span class="keyword">import</span> blossom.project.core.context.HttpRequestWrapper;</span><br><span class="line"><span class="keyword">import</span> blossom.project.core.helper.AsyncHttpHelper;</span><br><span class="line"><span class="keyword">import</span> blossom.project.core.helper.RequestHelper;</span><br><span class="line"><span class="keyword">import</span> blossom.project.core.helper.ResponseHelper;</span><br><span class="line"><span class="keyword">import</span> blossom.project.core.response.GatewayResponse;</span><br><span class="line"><span class="keyword">import</span> io.netty.channel.ChannelFutureListener;</span><br><span class="line"><span class="keyword">import</span> io.netty.channel.ChannelHandlerContext;</span><br><span class="line"><span class="keyword">import</span> io.netty.handler.codec.http.FullHttpRequest;</span><br><span class="line"><span class="keyword">import</span> io.netty.handler.codec.http.FullHttpResponse;</span><br><span class="line"><span class="keyword">import</span> io.netty.util.ReferenceCountUtil;</span><br><span class="line"><span class="keyword">import</span> lombok.extern.slf4j.Slf4j;</span><br><span class="line"><span class="keyword">import</span> org.asynchttpclient.Request;</span><br><span class="line"><span class="keyword">import</span> org.asynchttpclient.Response;</span><br><span class="line"></span><br><span class="line"><span class="keyword">import</span> java.util.Objects;</span><br><span class="line"><span class="keyword">import</span> java.util.concurrent.CompletableFuture;</span><br><span class="line"><span class="keyword">import</span> java.util.concurrent.TimeoutException;</span><br><span 
class="line"></span><br><span class="line"><span class="meta">@Slf4j</span> <span class="keyword">public</span> <span class="keyword">class</span> <span class="title class_">NettyCoreProcessor</span> <span class="keyword">implements</span> <span class="title class_">NettyProcessor</span> &#123;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">public</span> <span class="keyword">void</span> <span class="title function_">process</span><span class="params">(HttpRequestWrapper wrapper)</span> &#123;</span><br><span class="line">        <span class="type">FullHttpRequest</span> <span class="variable">request</span> <span class="operator">=</span> wrapper.getFullHttpRequest();</span><br><span class="line">        <span class="type">ChannelHandlerContext</span> <span class="variable">ctx</span> <span class="operator">=</span> wrapper.getContext();</span><br><span class="line"></span><br><span class="line">        <span class="keyword">try</span> &#123;</span><br><span class="line">            <span class="type">GatewayContext</span> <span class="variable">gatewayContext</span> <span class="operator">=</span> RequestHelper.doContext(request, ctx);</span><br><span class="line">            route(gatewayContext);</span><br><span class="line">        &#125; <span class="keyword">catch</span> (BaseException e) &#123;</span><br><span class="line">            log.error(<span class="string">&quot;process error &#123;&#125; &#123;&#125;&quot;</span>, e.getCode().getCode(), e.getCode().getMessage());</span><br><span class="line">            <span class="type">FullHttpResponse</span> <span class="variable">httpResponse</span> <span class="operator">=</span> ResponseHelper.getHttpResponse(e.getCode());</span><br><span class="line"></span><br><span class="line">            doWriteAndRelease(ctx, request, httpResponse);</span><br><span class="line">        &#125; <span class="keyword">catch</span> (Throwable t) &#123;</span><br><span class="line">            log.error(<span 
class="string">&quot;process unknown error&quot;</span>, t);</span><br><span class="line">            <span class="type">FullHttpResponse</span> <span class="variable">httpResponse</span> <span class="operator">=</span> ResponseHelper.getHttpResponse(ResponseCode.INTERNAL_ERROR);</span><br><span class="line">            doWriteAndRelease(ctx, request, httpResponse);</span><br><span class="line">        &#125;</span><br><span class="line"></span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">private</span> <span class="keyword">void</span> <span class="title function_">doWriteAndRelease</span><span class="params">(ChannelHandlerContext ctx, FullHttpRequest request, FullHttpResponse httpResponse)</span> &#123;</span><br><span class="line">        ctx.writeAndFlush(httpResponse)</span><br><span class="line">                .addListener(ChannelFutureListener.CLOSE);</span><br><span class="line">        ReferenceCountUtil.release(request);</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">private</span> <span class="keyword">void</span> <span class="title function_">route</span><span class="params">(GatewayContext gatewayContext)</span> &#123;</span><br><span class="line">        <span class="type">Request</span> <span class="variable">request</span> <span class="operator">=</span> gatewayContext.getRequest().build();</span><br><span class="line">        <span class="type">CompletableFuture</span> <span class="variable">future</span> <span class="operator">=</span> AsyncHttpHelper.getInstance().executeRequest(request);</span><br><span class="line"></span><br><span class="line">        <span class="type">boolean</span> <span class="variable">whenComplete</span> <span class="operator">=</span> ConfigLoader.getConfig().isWhenComplete();</span><br><span class="line"></span><br><span class="line">        <span 
class="keyword">if</span> (whenComplete) &#123;</span><br><span class="line">            future.whenComplete((response, throwable) -&gt; &#123;</span><br><span class="line">               complete(request, response, throwable, gatewayContext);</span><br><span class="line">            &#125;);</span><br><span class="line">        &#125; <span class="keyword">else</span> &#123;</span><br><span class="line">            future.whenCompleteAsync((response, throwable) -&gt; &#123;</span><br><span class="line">                complete(request, response, throwable, gatewayContext);</span><br><span class="line">            &#125;);</span><br><span class="line">        &#125;</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">private</span> <span class="keyword">void</span> <span class="title function_">complete</span><span class="params">(Request request,</span></span><br><span class="line"><span class="params">                          Response response,</span></span><br><span class="line"><span class="params">                          Throwable throwable,</span></span><br><span class="line"><span class="params">                          GatewayContext gatewayContext)</span> &#123;</span><br><span class="line">        gatewayContext.releaseRequest();</span><br><span class="line"></span><br><span class="line">        <span class="keyword">try</span> &#123;</span><br><span class="line"></span><br><span class="line">            <span class="keyword">if</span> (Objects.nonNull(throwable)) &#123;</span><br><span class="line">                <span class="type">String</span> <span class="variable">url</span> <span class="operator">=</span> request.getUrl();</span><br><span class="line">                <span class="keyword">if</span> (throwable <span class="keyword">instanceof</span> TimeoutException) &#123;</span><br><span class="line">                    log.warn(<span class="string">&quot;complete time 
out &#123;&#125;&quot;</span>, url);</span><br><span class="line">                    gatewayContext.setThrowable(<span class="keyword">new</span> <span class="title class_">ResponseException</span>(ResponseCode.REQUEST_TIMEOUT));</span><br><span class="line">                &#125; <span class="keyword">else</span> &#123;</span><br><span class="line">                    gatewayContext.setThrowable(<span class="keyword">new</span> <span class="title class_">ConnectException</span>(throwable,</span><br><span class="line">                            gatewayContext.getUniqueId(),</span><br><span class="line">                            url, ResponseCode.HTTP_RESPONSE_ERROR));</span><br><span class="line">                &#125;</span><br><span class="line">            &#125; <span class="keyword">else</span> &#123;</span><br><span class="line"></span><br><span class="line">                gatewayContext.setResponse(GatewayResponse.buildGatewayResponse(response));</span><br><span class="line">            &#125;</span><br><span class="line">        &#125; <span class="keyword">catch</span> (Throwable t) &#123;</span><br><span class="line">            gatewayContext.setThrowable(<span class="keyword">new</span> <span class="title class_">ResponseException</span>(ResponseCode.INTERNAL_ERROR));</span><br><span class="line">            log.error(<span class="string">&quot;complete error&quot;</span>, t);</span><br><span class="line">        &#125; <span class="keyword">finally</span> &#123;</span><br><span class="line">            gatewayContext.written();</span><br><span class="line">            ResponseHelper.writeResponse(gatewayContext);</span><br><span class="line">        &#125;</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h1 id="Implementing-NettyHttpClient"><a href="#Implementing-NettyHttpClient" class="headerlink" title="Implementing 
NettyHttpClient"></a>Implementing NettyHttpClient</h1><p>Above we implemented the server; now we move on to the client. The approximate steps are as follows: 1: implement the LifeCycle interface 2: encapsulate the properties 3: implement the init method 4: implement the start method 5: implement the shutdown method. As you will see, the client implementation is largely the same as the server implementation.</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span 
class="keyword">public</span> <span class="keyword">class</span> <span class="title class_">NettyHttpClient</span> <span class="keyword">implements</span> <span class="title class_">LifeCycle</span> &#123;</span><br><span class="line">    <span class="keyword">private</span> <span class="keyword">final</span> Config config;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">private</span> <span class="keyword">final</span> EventLoopGroup eventLoopGroupWoker;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">private</span> AsyncHttpClient asyncHttpClient;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">public</span> <span class="title function_">NettyHttpClient</span><span class="params">(Config config, EventLoopGroup eventLoopGroupWoker)</span> &#123;</span><br><span class="line">        <span class="built_in">this</span>.config = config;</span><br><span class="line">        <span class="built_in">this</span>.eventLoopGroupWoker = eventLoopGroupWoker;</span><br><span class="line">        init();</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">public</span> <span class="keyword">void</span> <span class="title function_">init</span><span class="params">()</span> &#123;</span><br><span class="line">        DefaultAsyncHttpClientConfig.<span class="type">Builder</span> <span class="variable">builder</span> <span class="operator">=</span> <span class="keyword">new</span> <span class="title class_">DefaultAsyncHttpClientConfig</span>.Builder()</span><br><span class="line">                .setEventLoopGroup(eventLoopGroupWoker)</span><br><span class="line">                .setConnectTimeout(config.getHttpConnectTimeout())</span><br><span class="line">                .setRequestTimeout(config.getHttpRequestTimeout())</span><br><span class="line">                
.setMaxRequestRetry(config.getHttpMaxRequestRetry())</span><br><span class="line">                .setAllocator(PooledByteBufAllocator.DEFAULT)</span><br><span class="line">                .setCompressionEnforced(<span class="literal">true</span>)</span><br><span class="line">                .setMaxConnections(config.getHttpMaxConnections())</span><br><span class="line">                .setMaxConnectionsPerHost(config.getHttpConnectionsPerHost())</span><br><span class="line">                .setPooledConnectionIdleTimeout(config.getHttpPooledConnectionIdleTimeout());</span><br><span class="line">        <span class="built_in">this</span>.asyncHttpClient = <span class="keyword">new</span> <span class="title class_">DefaultAsyncHttpClient</span>(builder.build());</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">public</span> <span class="keyword">void</span> <span class="title function_">start</span><span class="params">()</span> &#123;</span><br><span class="line">        AsyncHttpHelper.getInstance().initialized(asyncHttpClient);</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">public</span> <span class="keyword">void</span> <span class="title function_">shutdown</span><span class="params">()</span> &#123;</span><br><span class="line">        <span class="keyword">if</span> (asyncHttpClient != <span class="literal">null</span>) &#123;</span><br><span class="line">            <span class="keyword">try</span> &#123;</span><br><span class="line">                <span class="built_in">this</span>.asyncHttpClient.close();</span><br><span class="line">            &#125; <span class="keyword">catch</span> (IOException e) &#123;</span><br><span class="line">                log.error(<span class="string">&quot;NettyHttpClient shutdown error&quot;</span>, e);</span><br><span class="line">            &#125;</span><br><span 
class="line">        &#125;</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><h1 id="Implementing-the-core-container"><a href="#Implementing-the-core-container" class="headerlink" title="Implementing the core container"></a>Implementing the core container</h1><p>Now that the Netty-related code is complete, we move on to the core container. The approximate steps are as follows: 1: implement the LifeCycle interface 2: encapsulate the properties 3: implement the init method 4: implement the start method 5: implement the shutdown method. The core container is what starts our Netty client and server; once this step is done, basic request forwarding and receiving is in place.</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span 
class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">package</span> blossom.project.core;</span><br><span class="line"></span><br><span class="line"><span class="keyword">import</span> blossom.project.core.netty.NettyHttpClient;</span><br><span class="line"><span class="keyword">import</span> blossom.project.core.netty.NettyHttpServer;</span><br><span class="line"><span class="keyword">import</span> blossom.project.core.netty.processor.NettyCoreProcessor;</span><br><span class="line"><span class="keyword">import</span> blossom.project.core.netty.processor.NettyProcessor;</span><br><span class="line"><span class="keyword">import</span> lombok.extern.slf4j.Slf4j;</span><br><span class="line"></span><br><span class="line"><span class="meta">@Slf4j</span> <span class="keyword">public</span> <span class="keyword">class</span> <span class="title class_">Container</span> <span class="keyword">implements</span> <span class="title class_">LifeCycle</span> &#123;</span><br><span class="line">    <span class="keyword">private</span> <span class="keyword">final</span> Config config;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">private</span> NettyHttpServer nettyHttpServer;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">private</span> NettyHttpClient nettyHttpClient;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">private</span> NettyProcessor nettyProcessor;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">public</span> <span class="title function_">Container</span><span class="params">(Config 
config)</span> &#123;</span><br><span class="line">        <span class="built_in">this</span>.config = config;</span><br><span class="line">        init();</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">public</span> <span class="keyword">void</span> <span class="title function_">init</span><span class="params">()</span> &#123;</span><br><span class="line">        <span class="built_in">this</span>.nettyProcessor = <span class="keyword">new</span> <span class="title class_">NettyCoreProcessor</span>();</span><br><span class="line"></span><br><span class="line">        <span class="built_in">this</span>.nettyHttpServer = <span class="keyword">new</span> <span class="title class_">NettyHttpServer</span>(config, nettyProcessor);</span><br><span class="line"></span><br><span class="line">        <span class="built_in">this</span>.nettyHttpClient = <span class="keyword">new</span> <span class="title class_">NettyHttpClient</span>(config,</span><br><span class="line">                nettyHttpServer.getEventLoopGroupWoker());</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">public</span> <span class="keyword">void</span> <span class="title function_">start</span><span class="params">()</span> &#123;</span><br><span class="line">        nettyHttpServer.start();</span><br><span class="line">        nettyHttpClient.start();</span><br><span class="line">        log.info(<span class="string">&quot;api gateway started!&quot;</span>);</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">public</span> <span class="keyword">void</span> <span class="title function_">shutdown</span><span class="params">()</span> &#123;</span><br><span class="line">        nettyHttpServer.shutdown();</span><br><span class="line">        
nettyHttpClient.shutdown();</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h1 id="Demonstration"><a href="#Demonstration" class="headerlink" title="Demonstration"></a>Demonstration</h1><p><img src="https://s2.loli.net/2023/11/05/eogl3C51jU9cwzO.webp"></p><p><img src="https://s2.loli.net/2023/11/05/gdeDE2mivfHUwsO.webp"></p>]]></content>
    
    
    <summary type="html">I built a gateway from scratch, and it helped me land an offer from a major tech company. This is my complete 0-to-1 gateway design, including the thought process, flow charts, source code, and other materials.</summary>
    
    
    
    <category term="java" scheme="https://www.nablepart.com/categories/java/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="source" scheme="https://www.nablepart.com/tags/source/"/>
    
    <category term="design" scheme="https://www.nablepart.com/tags/design/"/>
    
    <category term="Self-developed" scheme="https://www.nablepart.com/tags/Self-developed/"/>
    
    <category term="gateway" scheme="https://www.nablepart.com/tags/gateway/"/>
    
    <category term="big factory" scheme="https://www.nablepart.com/tags/big-factory/"/>
    
    <category term="thinking" scheme="https://www.nablepart.com/tags/thinking/"/>
    
    <category term="process" scheme="https://www.nablepart.com/tags/process/"/>
    
  </entry>
  
  <entry>
    <title>Designing a gateway from 0 to 1: design points and architecture of a self-developed gateway</title>
    <link href="https://www.nablepart.com/60c063eada0d/"/>
    <id>https://www.nablepart.com/60c063eada0d/</id>
    <published>2023-11-05T06:12:00.000Z</published>
    <updated>2025-08-25T09:00:39.798Z</updated>
    
    <content type="html"><![CDATA[<p><a href="https://www.bilibili.com/video/BV1eC4y1n73c/?vd_source=1d4d63e205b3ad352b4771f87295d16d#reply747752344">Link to effect demo</a></p><h1 id="Flow-of-the-request"><a href="#Flow-of-the-request" class="headerlink" title="Flow of the request"></a>Flow of the request</h1><p>The full lifecycle of an HTTP request sent to a gateway typically includes the following steps:</p><p>Client Request: The request begins on the client side, which sends an HTTP request (e.g., GET, POST) to the entry point of the API gateway.</p><p>API Gateway Receive: The API gateway is the first point of receipt; it listens for requests from the client, usually on port 80 (HTTP) or 443 (HTTPS).</p><p>Request Routing: The API gateway routes the request to the appropriate backend service based on the request’s destination path or domain information. This can follow configured routing rules or dynamic routing based on the request path.</p><p>Request Authentication: Before forwarding the request, the API gateway may authenticate it, including identity checks, API key validation, and access token validation. This ensures that only authorized clients can access the backend service.</p><p>Request Transformation: The API gateway may need to transform the request’s data format so that it matches what the backend service expects. This can involve data conversion or remapping of request parameters.</p><p>Load Balancing: If there are multiple instances of the backend service, the API gateway may perform load balancing to pick a suitable instance to handle the request, so that requests are evenly distributed across the backend.</p><p>Request Proxy: Once routing and authentication are complete, the API gateway proxies the request to the selected backend service. 
This typically involves resending the request to the HTTP endpoint of the backend service.</p><p>Backend Processing: The backend service receives the request and performs the appropriate action based on its content. This can mean querying a database, running business logic, generating a response, and so on.</p><p>Response Transformation: The backend service generates a response and returns it to the API gateway, which may need to convert it to the format the client expects, perform data conversion, and so on.</p><p>Response Validation: Before sending the response to the client, the API gateway may also perform response validation, including authorization checks, addition or modification of response headers, and security checks.</p><p>Response Delivery: Finally, the API gateway delivers the processed response to the client, which receives it and acts on it.</p><h1 id="Architecture-Design"><a href="#Architecture-Design" class="headerlink" title="Architecture Design"></a>Architecture Design</h1><p>Looking at today's mainstream gateways, such as Spring Cloud Gateway and Zuul, both make heavy use of asynchronous programming in their lower layers, and both treat network communication as a central part of their design. 
For example, when I first read the Spring Cloud Gateway source code, I saw Netty used extensively.</p><p>Since our gateway is self-developed, i.e., a standalone service in its own right, we do not need a framework such as Spring Boot; we can write all the important code directly in plain Java.</p><p>For network communication, Netty is without doubt the choice.</p><p>As mentioned in the first article, the gateway also needs a registry: since requests are ultimately forwarded to a concrete route, the gateway must pull service information from a registry. The current options are Zookeeper, Eureka, Nacos, Apollo, etcd, and Consul, each with its own strengths and weaknesses. Zookeeper guarantees CP rather than AP; the gateway is the first entry point of an application, and for a gateway availability matters more than consistency, so even though we would use Zookeeper with Dubbo, we do not choose it here. Eureka is closely tied to the Spring Cloud ecosystem, so using it would couple our gateway to Spring Cloud, which runs against the point of building our own; we do not choose it either. etcd is a general-purpose distributed key-value store that fits distributed systems well and is very lightweight, but it offers no decisive advantage here, so it is not considered. Consul is much the same as etcd, so it is not considered either. Here I choose Nacos as the registry: Nacos supports both CP and AP protocols and provides a good console for managing my services. The Nacos community is very active, there is plenty of material online, and I have also read the Nacos source code, which is elegantly written and relatively easy to understand. 
I also believe that more and more people will be using Nacos. Of course, any of the registries above could be used; none has an overwhelming advantage or disadvantage, each suits certain situations, and the deciding factor is mostly which one your team understands best. For the configuration center, the candidates are Spring Cloud Config, Apollo, and Nacos.</p><p>Here the choice is clearly Nacos again, because Nacos is not only a registry but also a configuration center. By choosing Nacos we minimize the number of third-party components we introduce. So the technology stack consists of:</p><ul><li>Java</li><li>Netty</li><li>Nacos</li></ul><h1 id="Design-Points"><a href="#Design-Points" class="headerlink" title="Design Points"></a>Design Points</h1><p>There are some important design points to consider when designing a high-performance gateway.</p><p><strong>Serialization</strong> Parallelism naturally speeds up some workloads, but because it involves operating-system and thread-level operations, there are scenarios where serial execution is the better optimization. For short, latency-sensitive work, serial execution avoids scheduling overhead; for long-running tasks with no dependencies between them, such as RPC remote calls, parallel execution pays off.</p><p><strong>Asynchronization</strong> Asynchronous processing improves performance by letting the gateway handle multiple requests at the same time without blocking. This can be achieved with asynchronous frameworks, event-driven programming models, or multi-threaded&#x2F;multi-process processing. 
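</p><p>A minimal sketch of such asynchronous processing on the JVM (illustrative only; the class and method names here are hypothetical, not the gateway's actual code): the choice between <code>whenComplete</code> and <code>whenCompleteAsync</code> on a <code>CompletableFuture</code> decides which thread handles the completed downstream call.</p>

```java
import java.util.concurrent.CompletableFuture;

// Illustrative sketch: in one mode the completion callback runs on whichever
// thread completed the downstream call (whenComplete); in the other it is
// handed off to a separate pool via whenCompleteAsync (ForkJoinPool.commonPool()
// by default; a dedicated Executor could be passed as a second argument),
// freeing the IO thread sooner.
public class AsyncModeSketch {

    public static CompletableFuture forward(boolean handOff) {
        // stands in for the async downstream call (e.g. an HTTP client request)
        var future = CompletableFuture.supplyAsync(() -> "response");
        if (handOff) {
            // completion callback runs on another pool
            return future.whenCompleteAsync((r, t) -> handle(r, t));
        }
        // completion callback runs inline on the completing thread
        return future.whenComplete((r, t) -> handle(r, t));
    }

    private static void handle(Object response, Object error) {
        // here a gateway would write the response back or record the error
    }
}
```

<p>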
We can consider asynchronization in the following places: <strong>request forwarding, request response, and plugin filtering</strong>. <strong>For each, we can choose between single-asynchronous mode and double-asynchronous mode.</strong> This means that the moments at which a party receives and sends data need not be fully synchronized. In single-asynchronous mode, data can be transferred in only one direction at a time, while in double-asynchronous mode both directions can transfer data at once, so data moves much faster. We can use single-asynchronous mode for plugin filtering and double-asynchronous mode for the request response; double-asynchronous mode is also well suited to scenarios where the performance of the downstream server is limited. If single- and double-asynchronous modes are unfamiliar, a quick search will give you the basics.</p><p><strong>Cache</strong> Cache commonly used response data to reduce the load on the back-end services. A proper caching strategy can significantly improve response time and reduce resource usage. Consider a cache such as Caffeine; for a gateway, memory-level caching is recommended.</p><p><strong>Throughput</strong> Under high traffic we generally need peak shaving and flow limiting, for example with message middleware such as RocketMQ. In a self-developed gateway, however, message middleware adds coupling and an extra network hop, which increases the system’s RT. So we should consider a local buffer instead, such as the Disruptor.</p><p><strong>Reasonable Configuration of Threads</strong> We all know that tasks are either CPU-intensive or IO-intensive, so configuring the number of threads appropriately can also improve our project’s performance. 
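These sizing heuristics are easy to sketch in plain Java (a sketch only; the pool helpers below are illustrative, not the project’s actual configuration), with n taken as the number of available CPU cores:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;

public class ThreadPoolSizing {
    // n = number of available CPU cores.
    static final int N = Runtime.getRuntime().availableProcessors();

    // CPU-bound work: n + 1 threads (the extra thread covers occasional stalls such as page faults).
    public static ExecutorService cpuBoundPool() {
        return Executors.newFixedThreadPool(N + 1);
    }

    // IO-bound work: 2n threads, since threads spend much of their time blocked on IO.
    public static ExecutorService ioBoundPool() {
        return Executors.newFixedThreadPool(2 * N);
    }

    public static void main(String[] args) {
        ThreadPoolExecutor cpu = (ThreadPoolExecutor) cpuBoundPool();
        ThreadPoolExecutor io = (ThreadPoolExecutor) ioBoundPool();
        System.out.println("cores=" + N + " cpuPool=" + cpu.getCorePoolSize()
                + " ioPool=" + io.getCorePoolSize());
        cpu.shutdown();
        io.shutdown();
    }
}
```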
For common CPU-intensive tasks we set the thread count to n+1, and for IO-intensive tasks to 2n, where n is the number of CPU cores; both improve performance to some extent.</p><h1 id="Project-Architecture"><a href="#Project-Architecture" class="headerlink" title="Project Architecture"></a>Project Architecture</h1><p><img src="https://s2.loli.net/2023/11/05/Wzr2ol56HwbCMhg.webp"></p><ul><li>Common: shared code, such as enums</li><li>Client: client module that lets other modules access the gateway easily</li><li>Register Center: registry module</li><li>Config Center: configuration center module</li><li>Container: holds the core functionality</li><li>Context: request context and rules</li><li>FilterChain: chained execution of filters via the Chain of Responsibility pattern</li><li>FlowFilter: flow-control filter</li><li>LoadBalanceFilter: load-balancing filter</li><li>RouterFilter: routing filter</li><li>TimeoutFilter: timeout filter</li><li>OtherFilter: other filters</li><li>NettyHttpServer: receives external requests and feeds them into the internal flow</li><li>Processor: background request processing</li><li>Flusher: performance optimization</li><li>MPMC: performance optimization</li><li>SPI Loader: extension loader</li><li>Plugin Loader: plugin loader</li><li>Dynamic Loader: dynamic configuration loader</li><li>Config Loader: static configuration loader</li></ul><h1 id="Process-Design"><a href="#Process-Design" class="headerlink" title="Process Design"></a>Process Design</h1><p><img src="https://s2.loli.net/2023/11/05/HQR8lWBUiXmfMTK.webp"> With this more detailed processing flow, we can start preparing to write the code. Stay tuned for the next article, where the actual coding begins.</p>]]></content>
    
    
    <summary type="html">I built my own gateway from scratch, and it helped me land a job at a major tech company. This is my complete design of a gateway from 0 to 1, covering the thought process, flow charts, source code, and more.</summary>
    
    
    
    <category term="java" scheme="https://www.nablepart.com/categories/java/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="source" scheme="https://www.nablepart.com/tags/source/"/>
    
    <category term="design" scheme="https://www.nablepart.com/tags/design/"/>
    
    <category term="Self-developed" scheme="https://www.nablepart.com/tags/Self-developed/"/>
    
    <category term="gateway" scheme="https://www.nablepart.com/tags/gateway/"/>
    
    <category term="big factory" scheme="https://www.nablepart.com/tags/big-factory/"/>
    
    <category term="thinking" scheme="https://www.nablepart.com/tags/thinking/"/>
    
    <category term="process" scheme="https://www.nablepart.com/tags/process/"/>
    
  </entry>
  
  <entry>
    <title>Designing a gateway from 0 to 1: Building the architecture of a self-developed gateway</title>
    <link href="https://www.nablepart.com/bd45972a50a9/"/>
    <id>https://www.nablepart.com/bd45972a50a9/</id>
    <published>2023-11-05T05:12:00.000Z</published>
    <updated>2025-08-25T09:00:39.798Z</updated>
    
    <content type="html"><![CDATA[<p><a href="https://www.bilibili.com/video/BV1eC4y1n73c/?vd_source=1d4d63e205b3ad352b4771f87295d16d#reply747752344">Link to effect demo</a></p><h1 id="Build-the-project-skeleton"><a href="#Build-the-project-skeleton" class="headerlink" title="Build the project skeleton"></a>Build the project skeleton</h1><p>The IDE I use here is IDEA. From the previous article we know that our project has about five modules: Client, Common, Register Center, Config Center, and Core. <img src="https://s2.loli.net/2023/11/05/ZqyUlpkLC7uBw6D.webp"> The sub-projects are initialized as follows. <img src="https://s2.loli.net/2023/11/05/R7uQjoOXJETNSdP.webp"> <img src="https://s2.loli.net/2023/11/05/dCj9tXP7BqIAZH6.webp"> After that we can build the starter class in our Core module. <img src="https://s2.loli.net/2023/11/05/XiqkurTF9Nd2Wy8.webp"> At this point our project skeleton is complete.</p><h1 id="Domain-Model-and-DDD"><a href="#Domain-Model-and-DDD" class="headerlink" title="Domain Model and DDD"></a>Domain Model and DDD</h1><p>The Domain Model is a core concept in Domain-Driven Design (DDD): an abstract model used to represent and describe a specific domain. 
A domain model is a structured, object-oriented programming model that captures and reflects the concepts, rules, and relationships of the actual business domain.</p><p>The advantages of a domain model are a better understanding of the business domain, increased modularity, and stronger enforcement of business rules; the disadvantages are potentially increased complexity and development time. Its role is to realize the business requirements and keep the software system consistent with the business.</p><p>The role of domain modeling in software development includes:</p><p>Abstraction of Business Concepts: Domain modeling abstracts the concepts and entities of the business domain into objects, classes, and relationships in a programming language. This helps the development team better understand the business requirements.</p><p>Representation of Business Rules: A domain model can contain business rules, represented as methods, attributes, or constraints. This helps ensure that the application follows the business rules.</p><p>Definition of Domain Objects: A domain model contains domain objects such as entities, value objects, and aggregate roots, which carry business semantics and interact with the other objects in the model.</p><p>Modeling of Business Processes: Domain models can represent business processes, state transitions, and workflows. 
This helps developers better understand and model business processes.</p><p>Separation of Problem Domain and Solution: Domain modeling helps separate the problem domain (the business) from the solution domain (software development), so that developers can focus on solving business problems.</p><p>Maintainability and Extensibility: The abstraction and clarity of the domain model make the code easier to maintain and extend, because it reflects the structure and rules of the business domain.</p><p>In short, a domain model is designed for a <strong>specific business</strong>, and for that business we identify many business-specific features. If the business scenario changes, the current domain model may no longer fit and a new one has to be designed; but for the current scenario, even as requirements keep coming in, the domain model remains highly usable, because it was closely adapted to the business scenario during design.</p><p>The general approach to domain modeling is:</p><ul><li>Identify the corresponding domain objects</li><li>Create the corresponding entities</li><li>Establish the relationships between the entities</li></ul><p>Following this process, let us walk through our gateway project. 
After the service starts and loads its configuration, the gateway server receives a front-end request and parses it into internal parameters that flow through the gateway: after serialization the request passes through a set of filters and is forwarded to the back-end service, and once the back-end service has handled it, the gateway returns the result to the client. From this we can derive the following domain objects: Context, Request, Response, Config, Processor, Filter, FilterChain, Rule, and the HTTP request object.</p><p>With the domain objects in hand, we can define the corresponding entities. For example, our Request should carry the request time, parameters, an id, and so on, while the Response should carry the return value, status code, and similar information. A Context then represents the processing of one complete request.</p><p>This also gives us the relationship between the entities: a Context contains a Request and a Response.</p><h1 id="Core-Context-Model-Encapsulation"><a href="#Core-Context-Model-Encapsulation" class="headerlink" title="Core Context Model Encapsulation"></a>Core Context Model Encapsulation</h1><p>Encapsulating the core Context model involves the following steps:</p><ul><li>Context context core definition</li><li>BaseContext base context implementation</li><li>Context parameters</li><li>GatewayRequest implementation</li><li>GatewayResponse implementation</li><li>Rule, FilterConfig implementation</li><li>Final GatewayContext implementation</li></ul><p>We start by defining an abstract class that defines the core functional actions. After the design is complete, the code looks roughly as follows: <img src="https://s2.loli.net/2023/11/05/cWipOKyBjIFzerG.webp"> I won’t post the exact code implementation here, but I will briefly explain why these classes are needed. Let’s start with IContext, the top-level interface for gateway contexts, which defines a number of methods. 
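As a rough sketch of what such a context contract might look like, here is a minimal interface plus an in-memory implementation; every name below is my guess from the description above, not the project’s actual IContext code.

```java
public class ContextSketch {
    // Guessed contract for a gateway request context; names are illustrative only.
    public interface IContext {
        Object getRequest();
        Object getResponse();
        void setResponse(Object response);
        Throwable getThrowable();          // non-null when the request failed
        void setThrowable(Throwable t);
        boolean isKeepAlive();             // is this a long (keep-alive) connection?
        void terminate();                  // mark processing as finished
        boolean isTerminated();
    }

    // Minimal in-memory implementation, enough to show how a filter chain would use it.
    public static class BasicContext implements IContext {
        private final Object request;
        private final boolean keepAlive;
        private Object response;
        private Throwable throwable;
        private volatile boolean terminated;

        public BasicContext(Object request, boolean keepAlive) {
            this.request = request;
            this.keepAlive = keepAlive;
        }
        public Object getRequest() { return request; }
        public Object getResponse() { return response; }
        public void setResponse(Object response) { this.response = response; }
        public Throwable getThrowable() { return throwable; }
        public void setThrowable(Throwable t) { this.throwable = t; }
        public boolean isKeepAlive() { return keepAlive; }
        public void terminate() { terminated = true; }
        public boolean isTerminated() { return terminated; }
    }

    public static void main(String[] args) {
        IContext ctx = new BasicContext("GET /ping", true);
        ctx.setResponse("pong");
        ctx.terminate();
        System.out.println(ctx.getResponse() + " terminated=" + ctx.isTerminated());
    }
}
```

A filter chain would receive the context, read the request, write the response, and finally call terminate().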
This class mainly constrains the basic operations on the current gateway request: for example, whether the request executed normally or threw an exception, whether it uses a long (keep-alive) connection, whether it has a callback function, and so on. After the Context come the internal Request and Response implementations, i.e., the gateway request and gateway response objects. The gateway request information includes the gateway’s global start and end times, the request id, path, host, path parameters, request body parameters, and cookies; after processing, the gateway forwards this information to the back-end service. The gateway response information includes the response content, the response status code, and the asynchronous response object. After that, we need to define the rules used for filtering and organize them.</p><h1 id="Static-configuration-loading"><a href="#Static-configuration-loading" class="headerlink" title="Static configuration loading"></a>Static configuration loading</h1><p>Next we write the classes and methods that load the gateway’s configuration. 
Here are the configuration class details, which simply provide some information about the configuration class.</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line"><span class="keyword">public</span> <span class="keyword">class</span> <span class="title class_">GatewayConfig</span> &#123;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">private</span> <span class="type">int</span> <span class="variable">port</span> <span class="operator">=</span> <span class="number">8080</span>;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">private</span> <span class="type">String</span> <span class="variable">applicationName</span> <span class="operator">=</span> <span class="string">&quot;blossom-gateway&quot;</span>;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">private</span> <span class="type">String</span> <span class="variable">registryAddress</span> <span class="operator">=</span> <span class="string">&quot;localhost:8848&quot;</span>;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">private</span> <span class="type">String</span> <span class="variable">env</span> <span class="operator">=</span> <span 
class="string">&quot;dev&quot;</span>;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">private</span> <span class="type">int</span> <span class="variable">eventLoopGroupBossNum</span> <span class="operator">=</span> <span class="number">1</span>;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">private</span> <span class="type">int</span> <span class="variable">eventLoopGroupWorkerNum</span> <span class="operator">=</span> Runtime.getRuntime().availableProcessors();</span><br><span class="line"></span><br><span class="line">    <span class="keyword">private</span> <span class="type">int</span> <span class="variable">maxContentLength</span> <span class="operator">=</span> <span class="number">64</span> * <span class="number">1024</span> * <span class="number">1024</span>;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">private</span> <span class="type">boolean</span> <span class="variable">oddEvenAsync</span> <span class="operator">=</span> <span class="literal">false</span>;</span><br><span class="line">&#125;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>And the following provides configuration loading methods, respectively, from the configuration file, environment variables, JVM parameters, runtime parameters for loading configuration information.</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span 
class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br><span class="line">75</span><br><span class="line">76</span><br><span class="line">77</span><br><span class="line">78</span><br><span 
class="line">79</span><br><span class="line">80</span><br><span class="line">81</span><br><span class="line">82</span><br><span class="line">83</span><br><span class="line">84</span><br><span class="line">85</span><br><span class="line">86</span><br><span class="line">87</span><br><span class="line">88</span><br><span class="line">89</span><br><span class="line">90</span><br><span class="line">91</span><br><span class="line">92</span><br><span class="line">93</span><br><span class="line">94</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">package</span> blossom.gateway.core.config;</span><br><span class="line"></span><br><span class="line"><span class="keyword">import</span> cn.hutool.core.bean.BeanUtil;</span><br><span class="line"><span class="keyword">import</span> cn.hutool.core.collection.CollectionUtil;</span><br><span class="line"><span class="keyword">import</span> com.alibaba.fastjson2.util.BeanUtils;</span><br><span class="line"><span class="keyword">import</span> lombok.extern.slf4j.Slf4j;</span><br><span class="line"><span class="keyword">import</span> org.apache.commons.lang3.ArrayUtils;</span><br><span class="line"></span><br><span class="line"><span class="keyword">import</span> java.io.IOException;</span><br><span class="line"><span class="keyword">import</span> java.io.InputStream;</span><br><span class="line"><span class="keyword">import</span> java.util.Arrays;</span><br><span class="line"><span class="keyword">import</span> java.util.Map;</span><br><span class="line"><span class="keyword">import</span> java.util.Objects;</span><br><span class="line"><span class="keyword">import</span> java.util.Properties;</span><br><span class="line"></span><br><span class="line"><span class="keyword">public</span> <span class="keyword">class</span> <span class="title class_">ConfigLoader</span> &#123;</span><br><span class="line">    <span class="keyword">public</span> <span class="keyword">static</span> <span 
class="keyword">final</span> <span class="type">String</span> <span class="variable">CONFIG_FILE</span> <span class="operator">=</span> <span class="string">&quot;gateway.properties&quot;</span>;</span><br><span class="line">    <span class="keyword">public</span> <span class="keyword">static</span> <span class="keyword">final</span> <span class="type">String</span> <span class="variable">ENV_PREFIX</span> <span class="operator">=</span> <span class="string">&quot;GATEWAY_&quot;</span>;</span><br><span class="line">    <span class="keyword">public</span> <span class="keyword">static</span> <span class="keyword">final</span> <span class="type">String</span> <span class="variable">JVM_PREFIX</span> <span class="operator">=</span> <span class="string">&quot;gateway.&quot;</span>;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">private</span> <span class="keyword">static</span> <span class="keyword">final</span> <span class="type">ConfigLoader</span> <span class="variable">INSTANCE</span> <span class="operator">=</span> <span class="keyword">new</span> <span class="title class_">ConfigLoader</span>();</span><br><span class="line"></span><br><span class="line">    <span class="keyword">private</span> GatewayConfig config;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">private</span> <span class="title function_">ConfigLoader</span><span class="params">()</span> &#123;</span><br><span class="line"></span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">public</span> <span class="keyword">static</span> ConfigLoader <span class="title function_">getInstance</span><span class="params">()</span> &#123;</span><br><span class="line">        <span class="keyword">return</span> INSTANCE;</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">public</span> <span 
class="keyword">static</span> GatewayConfig <span class="title function_">getConfig</span><span class="params">()</span> &#123;</span><br><span class="line">        <span class="keyword">return</span> INSTANCE.config;</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">public</span> GatewayConfig <span class="title function_">load</span><span class="params">(String[] args)</span> &#123;</span><br><span class="line"></span><br><span class="line">        config = <span class="keyword">new</span> <span class="title class_">GatewayConfig</span>();</span><br><span class="line"></span><br><span class="line">        loadFromConfigFile();</span><br><span class="line"></span><br><span class="line">        loadFromEnv();</span><br><span class="line"></span><br><span class="line">        loadFromJvm();</span><br><span class="line"></span><br><span class="line">        loadFromRuntimeArgs(args);</span><br><span class="line">        <span class="keyword">return</span> config;</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">private</span> <span class="keyword">void</span> <span class="title function_">loadFromRuntimeArgs</span><span class="params">(String[] args)</span> &#123;</span><br><span class="line">        <span class="keyword">if</span> (ArrayUtils.isNotEmpty(args)) &#123;</span><br><span class="line">            <span class="type">Properties</span> <span class="variable">properties</span> <span class="operator">=</span> <span class="keyword">new</span> <span class="title class_">Properties</span>();</span><br><span class="line">            <span class="keyword">for</span> (String arg : args) &#123;</span><br><span class="line">                <span class="keyword">if</span> (arg.startsWith(<span class="string">&quot;--&quot;</span>) &amp;&amp; arg.contains(<span class="string">&quot;=&quot;</span>)) &#123;</span><br><span 
class="line">                    properties.put(arg.substring(<span class="number">2</span>, arg.indexOf(<span class="string">&quot;=&quot;</span>)), arg.substring(arg.indexOf(<span class="string">&quot;=&quot;</span>) + <span class="number">1</span>));</span><br><span class="line">                &#125;</span><br><span class="line">            &#125;</span><br><span class="line">            BeanUtil.copyProperties(properties, config);</span><br><span class="line">        &#125;</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">private</span> <span class="keyword">void</span> <span class="title function_">loadFromJvm</span><span class="params">()</span> &#123;</span><br><span class="line">        <span class="type">Properties</span> <span class="variable">properties</span> <span class="operator">=</span> System.getProperties();</span><br><span class="line">        BeanUtil.copyProperties(properties, config);</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">private</span> <span class="keyword">void</span> <span class="title function_">loadFromEnv</span><span class="params">()</span> &#123;</span><br><span class="line">        <span class="type">Map</span> <span class="variable">env</span> <span class="operator">=</span> System.getenv();</span><br><span class="line">        <span class="type">Properties</span> <span class="variable">properties</span> <span class="operator">=</span> <span class="keyword">new</span> <span class="title class_">Properties</span>();</span><br><span class="line">        properties.putAll(env);</span><br><span class="line">        BeanUtil.copyProperties(properties, config);</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">private</span> <span class="keyword">void</span> <span class="title 
function_">loadFromConfigFile</span><span class="params">()</span> &#123;</span><br><span class="line">        <span class="type">InputStream</span> <span class="variable">stream</span> <span class="operator">=</span> ConfigLoader.class.getClassLoader().getResourceAsStream(CONFIG_FILE);</span><br><span class="line">        <span class="keyword">if</span> (Objects.nonNull(stream)) &#123;</span><br><span class="line">            <span class="type">Properties</span> <span class="variable">properties</span> <span class="operator">=</span> <span class="keyword">new</span> <span class="title class_">Properties</span>();</span><br><span class="line">            <span class="keyword">try</span> &#123;</span><br><span class="line">                properties.load(stream);</span><br><span class="line">                BeanUtil.copyProperties(properties, config);</span><br><span class="line">            &#125; <span class="keyword">catch</span> (Exception e) &#123;</span><br><span class="line">                e.printStackTrace();</span><br><span class="line"></span><br><span class="line">            &#125; <span class="keyword">finally</span> &#123;</span><br><span class="line">                <span class="keyword">try</span> &#123;</span><br><span class="line">                    stream.close();</span><br><span class="line">                &#125; <span class="keyword">catch</span> (IOException e) &#123;</span><br><span class="line">                    <span class="keyword">throw</span> <span class="keyword">new</span> <span class="title class_">RuntimeException</span>(e);</span><br><span class="line">                &#125;</span><br><span class="line">            &#125;</span><br><span class="line">        &#125;</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><h1 id="Component-Lifecycle"><a href="#Component-Lifecycle" class="headerlink" title="Component Lifecycle"></a>Component Lifecycle</h1><p>The 
lifecycle piece is as simple as defining an interface that provides methods for initialization, startup, and shutdown.</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">public</span> <span class="keyword">interface</span> <span class="title class_">LifeCycle</span> &#123;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">void</span> <span class="title function_">init</span><span class="params">()</span>;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">void</span> <span class="title function_">start</span><span class="params">()</span>;</span><br><span class="line"></span><br><span class="line">    <span class="keyword">void</span> <span class="title function_">shutdown</span><span class="params">()</span>;</span><br><span class="line"></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure>]]></content>
    
    
    <summary type="html">I built my own gateway from scratch, and it helped me land a job at a major tech company. This is my complete design of a gateway from 0 to 1, covering the thought process, flow charts, source code, and more.</summary>
    
    
    
    <category term="java" scheme="https://www.nablepart.com/categories/java/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="source" scheme="https://www.nablepart.com/tags/source/"/>
    
    <category term="design" scheme="https://www.nablepart.com/tags/design/"/>
    
    <category term="Self-developed" scheme="https://www.nablepart.com/tags/Self-developed/"/>
    
    <category term="gateway" scheme="https://www.nablepart.com/tags/gateway/"/>
    
    <category term="big factory" scheme="https://www.nablepart.com/tags/big-factory/"/>
    
    <category term="thinking" scheme="https://www.nablepart.com/tags/thinking/"/>
    
    <category term="process" scheme="https://www.nablepart.com/tags/process/"/>
    
  </entry>
  
  <entry>
    <title>Louis Vuitton&#39;s L Catterton strategically acquires health technology company Thorne Health tech</title>
    <link href="https://www.nablepart.com/2fb705e4f079/"/>
    <id>https://www.nablepart.com/2fb705e4f079/</id>
    <published>2023-11-05T04:18:10.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
    <content type="html"><![CDATA[<p>Recently, L Catterton, a subsidiary of LV, announced that it will strategically acquire all the issued common stocks of Thorne Health tech, a health technology company, at a price of $10.20 per share and privatize it.<br>After the acquisition is completed, Thorne Health tech will be delisted from NASDAQ.<br>Thorne is an American company that provides personalized health testing and management services to consumers, as well as targeted healthcare products. It is worth mentioning that Thorne’s AI-driven technology platform,<br>Onegevity, provides actionable insights and personalized data, products, and services to help individuals actively manage and maintain their health.<br>The company went public on NASDAQ in 2021 and has been invested in by the Japanese consortium Kirin Holdings Co., Ltd. and Mitsui &amp; Co., Ltd. Before delisting, Thorne Health tech had a total share capital of 53.92 million<br>shares. Roughly calculated, L Catterton’s acquisition amount is $550 million, approximately 4 billion yuan.</p><h2 id="Who-is-Thorne-Health-tech"><a href="#Who-is-Thorne-Health-tech" class="headerlink" title="Who is Thorne Health tech?"></a>Who is Thorne Health tech?</h2><p>Thorne Health tech was founded in 1984 and is a health technology company that specializes in developing customized health products and services for consumers using its Onegevity AI technology platform. Specifically, the company provides health testing to generate comprehensive, personalized molecular portraits for customers. Then, relying on the Onegevity technology platform, it analyzes and generates health management plans and health products.</p><p>The Onegevity technology platform is a multi-omics database that learns and understands the dynamic biological features of billions of human bodies to accurately describe users’ health status. 
This technology platform not only serves B2C consumers but also collaborates with B2B clients such as pharmacies, health professionals, and lifestyle companies, providing health testing and advice to those clients’ patients through third-party applications while storing the data on the clients’ own portals.</p><p>In addition, Thorne HealthTech’s clients include biopharmaceutical companies, which can use the Onegevity Discovery platform to assist in drug development.</p><p>So far, Thorne has over 5 million B2C customers, more than 46,000 healthcare professionals, thousands of professional athletes, and over 100 B2B clients, including professional sports teams. In March of this year, the company also made Fast Company’s 2023 list of the world’s most innovative companies.</p><p>Thomas Wilson, the executive in charge of investor relations, said in an interview on March 31 of this year that Thorne’s growth rate in 2022 was 10 to 15 times the industry average. The company plans to achieve net sales of $280 million to $290 million in 2023, an increase of 22% to 27% over 2022. By the end of 2023, the company will also complete construction of a new health-products factory and is expected to become one of the top five health product manufacturers in the United States.<br>Although Thorne’s growth is impressive, its stock market performance has been lackluster. On September 24, 2021, Thorne HealthTech went public on NASDAQ in the United States. The planned offering price was between $13 and $15, but the actual offering price was $10, raising a total of $70 million. On the first day of trading, however, the stock fell below the offering price, closing at $7.55, a 24.5% drop.</p><p>Over the next year the stock price continued to decline, falling to $3.41 in November 2022. It was still only around $4.72 in April of this year and began to recover gradually only in July. 
So this time, L Catterton offered $10.2 per share, surpassing Thorne’s highest price since going public.</p><p>It is worth mentioning that when the company went public, it received investments from two Japanese financial groups: Kirin Holdings Co., Ltd. and Mitsui &amp; Co., Ltd. Kirin Holdings is a major Japanese company engaged in food and beverage businesses, as well as selling health products and drugs. Mitsui &amp; Co., Ltd. is one of Japan’s largest general trading companies. Prior to this, Thorne had also mentioned in its public disclosures that it had established a joint venture with Mitsui &amp; Co., Ltd. in Asia and opened its first Asian retail outlet in Singapore.</p><p>On October 17 of this year, Thorne HealthTech disclosed an insider transaction: Kirin Holdings Co., Ltd., a shareholder holding more than 10% of the shares, sold 15.6742 million shares at $10.2 on October 12, 2023, with L Catterton as the counterparty.</p><h2 id="Lu-Weikai-Teng-invested-in-China"><a href="#Lu-Weikai-Teng-invested-in-China" class="headerlink" title="Lu Weikai Teng, invested in China."></a>Lu Weikai Teng (L Catterton) invests in China</h2><p>Presumably, L Catterton believes Thorne is a hidden gem whose stock performance does not reflect its actual value, which is why it is willing to acquire the company at a premium. “As investors focused on the consumer sector, we closely follow the industry trends in the field of health and wellness and understand the increasing importance of healthy lifestyles for consumers,” said Marc Magliacano, Co-Managing Partner of L Catterton’s flagship fund. 
“By integrating global resources and strategic planning, we can further realize Thorne’s vision: to provide clinically validated nutritional solutions to global customers through professional health testing and personalized healthcare products.”<br>L Catterton’s history can be traced back to 1989, when Carl Frechette and his accountant Frank Vest, along with Chinese-American investor J. Michael Chu and former US Treasury Secretary William Simon, jointly created Catterton-Simon Partners. Over the following 27 years, Catterton made many well-known investments in North and Latin America. The real turning point, however, did not come until 2016, when Catterton, LVMH, and Bernard Arnault’s family holding company Financière Agache joined forces to create L Catterton, merging Catterton’s private equity business in North and Latin America with LVMH and Financière Agache’s previous private equity and real estate businesses in Europe and Asia. Since then, L Catterton has been a global investment institution backed by the largest luxury group, LVMH. According to official disclosures, L Catterton currently manages approximately $34 billion in funds and has more than 200 investors worldwide, with offices in 17 locations globally. From 1989 to the present, it has made more than 250 investments, including well-known enterprises such as the century-old German shoe brand Birkenstock, high-end trendy eyewear brand Gentle Monster, British custom fresh pet food brand Butternut Box, US lightweight camping trailer manufacturer Taxa Outdoors, sustainable home textile manufacturer and retailer Boll&amp;Branch, Danish fashion designer brand GANNI, and top Italian bicycle brand Pinarello.</p><p>Of course, China is also an important investment market for L Catterton. As early as 2012, L Catterton invested 300 million yuan in the Chinese cosmetics brand Marubi. 
Marubi successfully went public in July 2019, and L Catterton cashed out about 800 million yuan after the lock-up period expired.</p><p>Since then, L Catterton has stepped up its expansion in China. In 2021, it participated in the $500 million strategic investment in YQS and the D-round financing of Heytea. In 2022, it invested in the pet brand Berna Pure and participated in the B-round investment in Pat, a high-end pet food brand with a “raw meat and bone formula”. In September of this year, it joined hands with Lyon Capital to complete a 200 million yuan B-round investment in the synthetic biology start-up Chuangjian Medical. Amid today’s cooling consumer-investment climate, L Catterton undoubtedly brings a bright spot to the domestic consumer track.</p><p>Most eye-catching of all, in October last year L Catterton established its first RMB fund in the Chengdu High-tech Zone, with a scale of more than 2 billion yuan. Chengdu is the consumption capital of western China, so it is not surprising that L Catterton chose to invest there. Chengdu has the third-highest number of LV stores in the country, behind only Beijing and Shanghai. Earlier media reports showed that the “LV Home” in Chengdu Taikoo Li set a monthly sales record of 350 million yuan and a quarterly record of 930 million yuan, ranking first among single stores in the country.</p><p>In mid-October of this year, at the “2023 World Cultural City Global Roadshow” held in Chengdu, Chang Shuai, Vice President of L Catterton Global Opportunities, said: “As an emerging fashion capital of China, Chengdu not only has a distinct and diverse aesthetic, but also has a natural love for lifestyle among consumers. They are happy to express their attitudes towards life through eating, drinking, and playing.” Chengdu, known for its lifestyle, is a natural partner for the world’s largest consumer PE firm.</p><p>Finally, let’s go back to Thorne. 
Thorne’s management team disclosed that in 2022 it expanded its e-commerce capabilities in China across multiple platforms, especially on Douyin (TikTok’s Chinese counterpart), where it ranked in the top ten for several months, and that it will continue to expand this business in 2023. With the support of L Catterton, an offline entry into the Chinese market is also worth looking forward to.</p>]]></content>
    
    
      
      
    <summary type="html">&lt;p&gt;Recently, L Catterton, a subsidiary of LV, announced that it will strategically acquire all the issued common stocks of Thorne Health tec</summary>
      
    
    
    
    <category term="Finance" scheme="https://www.nablepart.com/categories/Finance/"/>
    
    
    <category term="Investment" scheme="https://www.nablepart.com/tags/Investment/"/>
    
    <category term="Acquisition" scheme="https://www.nablepart.com/tags/Acquisition/"/>
    
    <category term="Luxury AI" scheme="https://www.nablepart.com/tags/Luxury-AI/"/>
    
    <category term="Health tech" scheme="https://www.nablepart.com/tags/Health-tech/"/>
    
    <category term="Fund" scheme="https://www.nablepart.com/tags/Fund/"/>
    
  </entry>
  
  <entry>
    <title>Designing a gateway from 0 to 1: How to design a stable gateway</title>
    <link href="https://www.nablepart.com/00e51e05af06/"/>
    <id>https://www.nablepart.com/00e51e05af06/</id>
    <published>2023-11-05T04:12:00.000Z</published>
    <updated>2025-08-25T09:00:39.798Z</updated>
    
    <content type="html"><![CDATA[<p>This article contains no specific business implementation; it only lays out some thinking on how to design a highly available, stable gateway. If you are not interested in the concepts, feel free to skip it.</p><h1 id="High-Availability-Analysis"><a href="#High-Availability-Analysis" class="headerlink" title="High Availability Analysis"></a>High Availability Analysis</h1><p>High Availability (HA) is the ability of a system or service to keep running and remain accessible in the face of a variety of failures, errors, or adverse conditions. High availability is an important goal in the design of computer systems, networks, and applications: it reduces system downtime and ensures business continuity to meet the needs of users and customers. Why do we need high availability? We can look at it from the following angles:</p><ul><li>Business continuity: In today’s digital world, many organizations rely on computer systems and networks to support their core business. System downtime can lead to business interruption, damage to reputation, financial loss, and loss of customer trust. High availability keeps systems continuously available and helps maintain business continuity.</li><li>User experience: Users expect to be able to access applications and services anytime, anywhere. If systems are frequently unavailable or respond slowly, the user experience suffers and users may look for alternatives. High availability improves the user experience and increases user satisfaction.</li><li>Fault tolerance: Hardware failures, software bugs, natural disasters, or human error can lead to system outages. 
A high-availability design allows the system to tolerate these failures and automatically switch to an alternate device or data center to maintain continuity.</li><li>Data protection: Data is a critical asset of an organization, and loss or corruption of data can cause significant damage to operations. High-availability systems often employ data redundancy and backup strategies to ensure data integrity and availability.</li></ul><p>The gateway, as the entry point for requests, is one of the most frequently used services in a project, so its availability largely determines the reliability of our services and, to a certain extent, the user experience. Therefore, when designing the gateway, we need to ensure high availability as much as possible, which can be approached from the following angles:</p><ul><li>Hardware redundancy: Use redundant components, such as dual power supplies, hot-swappable hardware, disk arrays, etc., to reduce the impact of hardware failure on the system.</li><li>Software redundancy: Use load balancing and failover mechanisms to ensure that applications and services can run on multiple servers and that traffic can be switched to another server when one fails.</li><li>Data backup and recovery: Implement a regular data backup strategy to ensure data security and availability. Backup data can be used to restore the system in the event of a disaster.</li><li>Monitoring and automation: Use monitoring tools to monitor system performance and availability in real time. 
Automating system management and failback operations can help reduce system downtime.</li><li>Multi-data-center deployments: Deploy across multiple geographic locations to increase the fault tolerance of the system, so that even if one data center fails, the others can continue to provide service.</li></ul><p>The easiest of these to grasp is software redundancy, i.e., deploying our services as a cluster. Assuming each service instance is up with probability 90%, a cluster of three instances gives an overall availability of 1 - 0.1 * 0.1 * 0.1 = 99.9%.</p><h2 id="Software-Architecture"><a href="#Software-Architecture" class="headerlink" title="Software Architecture"></a>Software Architecture</h2><p>The deployment of one of our services might then look like the following architecture.<img src="https://s2.loli.net/2023/11/05/HJiaRqYv93sUy6X.webp"> To go further, we also need to eliminate manually switching over a crashed service: that is, implement hot switching between primary and backup so that failover happens automatically when a service fails. We can determine whether a service is alive through periodic heartbeat detection. 
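The heartbeat idea can be sketched as a small registry: instances report a timestamp on each heartbeat, and a periodic sweep drops any instance whose last heartbeat is older than the allowed window. This is a minimal illustration with hypothetical names, not a real registry product’s API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal heartbeat-registry sketch (illustrative names).
// Instances call beat() periodically; sweep() is run on a timer and
// removes any instance whose last heartbeat exceeded the timeout window.
class HeartbeatRegistry {
    private final Map<String, Long> lastBeat = new ConcurrentHashMap<>();
    private final long timeoutMillis;

    HeartbeatRegistry(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    // Called by a service instance on every heartbeat.
    void beat(String instanceId, long nowMillis) {
        lastBeat.put(instanceId, nowMillis);
    }

    // Periodic sweep: drop instances that missed their heartbeat window.
    void sweep(long nowMillis) {
        lastBeat.entrySet().removeIf(e -> nowMillis - e.getValue() > timeoutMillis);
    }

    boolean isAlive(String instanceId) {
        return lastBeat.containsKey(instanceId);
    }
}
```

In a real deployment the sweep would run on a scheduler and removal would also update the routing table, but the liveness decision itself is just this timestamp comparison.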
If a service is not alive, we remove it from the registry list so that it is not accessed for the time being, then repair and maintain the downed service and restart it.</p><h2 id="Heartbeat-detection"><a href="#Heartbeat-detection" class="headerlink" title="Heartbeat detection"></a>Heartbeat detection</h2><p><img src="https://s2.loli.net/2023/11/05/4kvJcrT7BsVGodh.webp"></p><h2 id="Auto-recovery"><a href="#Auto-recovery" class="headerlink" title="Auto-recovery"></a>Auto-recovery</h2><p>Similarly, when a service has gone offline because of a problem, we usually want it to start again automatically rather than requiring a manual restart. We can use scripts or similar tooling to attempt an automatic restart a certain number of times, and intervene manually only if that fails.</p><h2 id="Fuse-Degradation"><a href="#Fuse-Degradation" class="headerlink" title="Fuse Degradation"></a>Fuse Degradation</h2><p>To further improve the gateway’s availability, we also need circuit breaking. Here we can borrow the ideas behind Hystrix, which provides two distinct concepts: circuit breaking (fusing) and fallback (degradation). This also came up in my interview with Xiaomi; never confuse the two. A circuit breaker prevents constant requests to a failed service in a distributed system: it stops requests to the failed service for a period of time to give the service time to recover. 
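The breaker behaviour just described can be sketched as a tiny state machine. This is an illustration of the concept, not Hystrix’s actual API: after a threshold of consecutive failures the breaker opens and rejects calls; once a cooldown has elapsed, the next call is allowed through as a trial, and a success closes the breaker again:

```java
// Minimal circuit-breaker sketch in the spirit of Hystrix (illustrative names).
// CLOSED: calls flow normally. After `failureThreshold` consecutive failures
// the breaker opens; while open, calls are rejected until `cooldownMillis`
// has passed, at which point a trial call is permitted.
class CircuitBreaker {
    private enum State { CLOSED, OPEN }

    private State state = State.CLOSED;
    private int failures = 0;
    private long openedAt = 0;
    private final int failureThreshold;
    private final long cooldownMillis;

    CircuitBreaker(int failureThreshold, long cooldownMillis) {
        this.failureThreshold = failureThreshold;
        this.cooldownMillis = cooldownMillis;
    }

    // Should the call be attempted right now?
    boolean allowRequest(long nowMillis) {
        if (state == State.CLOSED) return true;
        return nowMillis - openedAt >= cooldownMillis; // half-open trial
    }

    void recordSuccess() {
        failures = 0;
        state = State.CLOSED;
    }

    void recordFailure(long nowMillis) {
        failures++;
        if (failures >= failureThreshold) {
            state = State.OPEN;
            openedAt = nowMillis;
        }
    }
}
```

Production breakers (Hystrix, Resilience4j) use a rolling error-rate window rather than a simple consecutive-failure counter, but the open/half-open/closed cycle is the same.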
This prevents the system from exhausting resources on constant requests to a faulty service and protects overall availability. <img src="https://s2.loli.net/2023/11/05/gGO2cJjusVHv4Fi.webp"></p><p>Fallback is another key feature of Hystrix. It lets you define an alternate operation or response to provide to the user or system when the primary operation (typically an invocation of an external service) fails or times out. A fallback is usually a simplified operation that cannot fail, ensuring the system can still provide some level of service when the primary operation does not. Through fallback you can offer a better user experience and still provide useful information or services even when the primary operation fails.</p><h2 id="Interface-retries"><a href="#Interface-retries" class="headerlink" title="Interface retries"></a>Interface retries</h2><p>A single failure does not mean the call can never succeed, so we can retry a certain number of times.</p><h2 id="Isolation"><a href="#Isolation" class="headerlink" title="Isolation"></a>Isolation</h2><p>Isolation means splitting the system or its resources. System isolation limits the blast radius when a failure occurs, i.e., there is no snowball effect after the failure, so only the service that went wrong is unavailable while the others remain available.</p><ul><li>Network security: By placing gateways between different networks, potential threats, malicious traffic, or unauthorized access to protected networks can be restricted. This helps reduce the risk of attacks such as intrusions, malware propagation, and data leakage.</li><li>Access control: Gateway isolation can be used to manage network access and ensure that only authorized users or devices can access specific network resources. 
This helps protect sensitive data and systems from unauthorized access.</li><li>Network performance: Network performance and traffic management can be improved by separating networks. For example, different departments or applications can be placed in different subnets to prevent traffic conflicts between them, improving overall network efficiency.</li><li>Isolated problem troubleshooting: Dividing the network into smaller areas makes it easier to identify and resolve network problems. If a problem occurs in a specific subnet, administrators can determine its root cause more quickly without affecting the entire network.</li></ul><p>There are many more forms of isolation: thread isolation, process isolation, cluster isolation, machine-room isolation, read&#x2F;write isolation, fast&#x2F;slow isolation, dynamic&#x2F;static isolation, and crawler isolation. ThreadLocal, which we use frequently, is a form of thread isolation, and it is the one we focus on here. For example, our services can be divided into core and non-core services, and we can then isolate the two at the thread-pool level: a core thread pool and a non-core thread pool each provide threads for their respective services.</p><h2 id="Pressure-Testing-and-Profiling"><a href="#Pressure-Testing-and-Profiling" class="headerlink" title="Pressure Testing and Profiling"></a>Pressure Testing and Profiling</h2><blockquote><p>“Pressure testing” is the practice of load testing a system, application, or network to determine its performance, stability, and reliability under varying load conditions. It is usually performed by simulating a large number of users or requests to assess the system’s performance limits and ensure it will meet performance requirements in real-world use. Pressure testing can help identify performance bottlenecks, memory leaks, response-time issues, and other performance-related problems in a system. 
Such tests are performed during development, testing, and before production deployment to ensure that the system will function properly under varying workload conditions.<br>A “contingency plan” (also known as an emergency response plan or crisis management plan) is a document that sets out the actions and procedures to be taken in an emergency. It covers ways to respond to different types of emergencies or crises in order to minimize damage, ensure the safety of employees and customers, and restore normal business operations: fires, natural disasters, cyber-attacks, data breaches, supply chain disruptions, and so on. Such plans are usually prepared by the organization’s emergency management or crisis management team and need to be updated and tested regularly to remain effective.<br>In the technical realm, stress testing and contingency plans are related. While a stress test helps an organization determine how well its systems perform under high load or in emergencies, a contingency plan specifies the action steps to take in the event of a technology failure, cyberattack, or other technology-related crisis. These actions may include system maintenance, restoration of backup data, notification of relevant parties, or other technical measures. 
As such, both stress testing and contingency planning are important tools for ensuring that systems and organizations can operate and be managed properly in a variety of situations.</p></blockquote><h2 id="Multi-Room-Disaster-Recovery-and-Dual-Activity-Data-Centers"><a href="#Multi-Room-Disaster-Recovery-and-Dual-Activity-Data-Centers" class="headerlink" title="Multi-Room Disaster Recovery and Dual-Activity Data Centers"></a>Multi-Room Disaster Recovery and Dual-Activity Data Centers</h2><p>Multi-Room Disaster Recovery (DR) and Dual-Active Data Centers are two strategies used to improve the high availability of systems and applications. They combine multiple data centers or server rooms to guarantee continuous availability of services in the event of a failure or disaster. Here’s how they guarantee high availability:</p><p>Multi-room disaster recovery (Disaster Recovery):</p><ul><li>Data redundancy: Data is typically replicated to different server rooms to ensure availability for backup and recovery. This can be accomplished through methods such as database replication, backups, and off-site storage.</li><li>Automatic failover: Multi-room disaster recovery typically uses an automatic failover system, which automatically redirects traffic to the secondary data center when one data center fails, in order to maintain service continuity.</li><li>Hot standby: Standby data centers are usually hot standby, meaning they run continuously and stand ready to receive traffic, reducing switchover time.</li><li>Regular drills: Organizations conduct regular disaster recovery drills to ensure that in the event of a disaster, personnel understand how to switch over to the secondary data center and that the system recovery process works.</li></ul><p>Dual-Active Data Centers (Active-Active Data Centers):</p><ul><li>Load balancing: Traffic is often distributed across different data centers to ensure load balancing. 
This can be achieved through load balancers and DNS resolution.</li><li>Data synchronization: Data is synchronized in real time across multiple data centers to ensure consistency. This can be achieved through mechanisms such as database replication, file synchronization, and message queuing.</li><li>Redundant infrastructure: Dual-active data centers typically have redundant networks, storage, servers, etc., so that if one data center fails, traffic can be seamlessly switched to the other.</li><li>Automatic failover: When one data center fails, the load balancer usually switches traffic to the other data center automatically to ensure service continuity.</li><li>Performance monitoring: A real-time performance monitoring system can help detect failures and problems and trigger automatic failover.</li></ul><p>An important pattern for multi-room disaster recovery here is “two locations, three centers”, covering same-city and off-site disaster recovery. “Two locations, three centers” means: a production center, a same-city disaster recovery center, and an off-site disaster recovery center. Off-site disaster recovery requires choosing a suitable data replication&#x2F;backup technology, and the backup solution must be matched to the actual communication latency and the maximum recovery time users can tolerate.</p><p>Dual-active data centers are not quite the same as disaster recovery and backup centers: the former are always serving traffic, while the latter are used only in case of failure. 
Of course, dual-active data centers are difficult to implement, and mature reference implementations are still rare in China, but it is worth keeping this knowledge in mind.</p><h1 id="Exception-Handling-Mechanism"><a href="#Exception-Handling-Mechanism" class="headerlink" title="Exception Handling Mechanism"></a>Exception Handling Mechanism</h1><p>An exception handling mechanism helps us locate problems and analyze them statistically. Exceptions are usually surfaced to two kinds of consumers: people (users, operations, developers, etc.) and machines (programs acting on status codes). There are many ways to raise alerts, and they can be chosen according to the team’s needs: a simple email, a notification through the company’s internal IM system, or, for more critical alerts, phone calls and SMS. The gateway, as the portal for requests, also needs to handle exceptions, specifically:</p><ul><li>Uniform management of exception codes</li><li>Centralized collection and handling of exceptions, so that individual business services can focus on the business itself</li><li>Uniform handling by the gateway of the message corresponding to each exception status code</li></ul><p>For exception handling we can also lean on fault-tolerant retries; retrying is one means of improving the fault tolerance of the system.</p><h2 id="Retry"><a href="#Retry" class="headerlink" title="Retry"></a>Retry</h2><p>Retries have advantages and disadvantages. They are easy to implement, and if the failure was caused by a transient network problem, retrying after the network recovers is the best solution. The downside is that retries multiply the traffic hitting the service, which may crash it outright. 
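One way to keep retries from amplifying load is to cap the attempt count and guard retries with the observed success rate, so a service that already looks unhealthy is not hammered further. A minimal sketch with illustrative names (the counting here stands in for whatever success-rate tracking the gateway actually uses):

```java
import java.util.function.Supplier;

// Capped retry with a success-rate guard (illustrative names).
// Retrying stops early when the observed success rate of the downstream
// service falls below a minimum, so retries do not pile extra traffic
// onto a service that is already failing.
class GuardedRetry {
    private long attempts = 0;
    private long successes = 0;

    private double successRate() {
        return attempts == 0 ? 1.0 : (double) successes / attempts;
    }

    // Runs `call` up to (1 + maxRetries) times, but only continues
    // retrying while the success rate stays at or above `minRate`.
    boolean run(Supplier<Boolean> call, int maxRetries, double minRate) {
        for (int i = 0; i <= maxRetries; i++) {
            attempts++;
            if (call.get()) {
                successes++;
                return true;
            }
            if (successRate() < minRate) break; // service looks unhealthy: stop
        }
        return false;
    }
}
```

A production version would track the rate over a sliding time window and add backoff between attempts, but the core idea is exactly this guard.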
The solution is to actively track the service’s success rate and simply skip retries when it drops too low; the success rate can be computed with simple counters or similar methods.</p><h2 id="Automatic-switching-between-primary-and-secondary-services"><a href="#Automatic-switching-between-primary-and-secondary-services" class="headerlink" title="Automatic switching between primary and secondary services"></a>Automatic switching between primary and secondary services</h2><p>Retrying a single service can still be problematic, so can we consider proactively switching between primary and backup services? That is, when a request to service A fails, we switch and send the request to service B (the backup) instead, so that A is not hit twice. But this switching, like retrying, degrades the user experience because the response time increases. Also, if the primary service cannot handle the traffic because the load is too heavy, there is a high probability that the backup cannot handle it either. And since most of the time we use only the primary service, the idle backup is a waste of resources. 
Of course, if the backup serves core scenarios such as order payment, it is still very important.</p><h2 id="Dynamically-cull-or-recover-abnormal-machines"><a href="#Dynamically-cull-or-recover-abnormal-machines" class="headerlink" title="Dynamically cull or recover abnormal machines"></a>Dynamically cull or recover abnormal machines</h2><p>Keep all services and storage behind stateless routing (see <a href="https://blog.csdn.net/Zhangsama1/article/details/134222718?spm=1001.2014.3001.5502">Difference between stateless and stateful routing</a>). The benefit is avoiding a single point of risk where one hung service brings the whole cluster down. And since systems like log monitoring are generally not deployed in large numbers, relying on them alone is not completely dependable. So we use stateless routing here, distributing requests randomly across the backend services.</p><p>Support parallel expansion: in heavy-traffic scenarios, capacity can be expanded by adding machines. In practice, when monitoring detects heavy traffic or CPU spikes, we expand capacity through the scaling mechanism set up earlier.</p><p>Automatically reject abnormal machines: once our routing service finds that the anomaly rate of requests to a service reaches the configured threshold, we take the service out of rotation. We then send some trial requests, and if they pass, we bring the service back.</p><h2 id="Timeout-considerations"><a href="#Timeout-considerations" class="headerlink" title="Timeout considerations"></a>Timeout considerations</h2><p>Traditionally, we simply set a timeout period. 
If the timeout is too long, worker threads stay blocked for a long time; over time more and more threads become unavailable and the service fails. If it is too short, requests time out before they finish processing: resources are wasted, and a request that actually succeeded is reported back as a failure. Clearly, setting one uniform timeout for all services is not reasonable, so we need strategies to address this. The directions I currently see are as follows. 1: Fast&#x2F;slow separation. We can deploy services separately according to how long they need to process requests, and set a separate timeout for each.</p><p>2: Avoid synchronous blocking waits. Blocking requests, such as downloading IO resources, leave the system waiting on resources for a long time and unable to respond to new requests, wasting resources. The solution is IO multiplexing with asynchronous callbacks. In fact, I designed the gateway itself to process requests asynchronously with CompletableFuture. Asynchronous processing improves the system’s throughput, but it does not by itself improve the user experience. From here it is worth studying coroutines, which let you write synchronous-looking code on top of an asynchronous callback mechanism (my company’s work requires Go, where this style is native).</p><p>3: Anti-re-entry, also known as idempotence. If a request waits a long time or appears unprocessed, the user is likely to click twice, causing the request to re-enter the system, so we must handle idempotence well. 
Solutions include distributed locks with Redis, re-entrant locks, unique order numbers combined with a bitmap, and so on.</p><h2 id="Service-design"><a href="#Service-design" class="headerlink" title="Service design"></a>Service design</h2><p>Service degradation, automatically shielding exceptions in non-core branches. 1: For the core link of a service, focus on monitoring and set a longer timeout. 2: For non-core links, set a shorter timeout.</p><p>Service decoupling and physical isolation. 1: Service separation, splitting large services into small ones. 2: Light&#x2F;heavy separation with physical isolation.</p><p>Fault tolerance at the business level: human error is unavoidable. Even with a perfect monitoring system there is no guarantee of zero mistakes, so the problem needs to be solved at the system level. For example, we can address it from the business side by rejecting incorrect configuration information with an explicit error, which requires building an intelligent configuration-validation system for the different services.</p>]]></content>
    
    
    <summary type="html">A self-developed gateway that helped me land an offer from a major tech company. This is my complete design of a gateway from 0 to 1; the materials include the thinking process, flow charts, source code, and more.</summary>
    
    
    
    <category term="java" scheme="https://www.nablepart.com/categories/java/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="source" scheme="https://www.nablepart.com/tags/source/"/>
    
    <category term="design" scheme="https://www.nablepart.com/tags/design/"/>
    
    <category term="Self-developed" scheme="https://www.nablepart.com/tags/Self-developed/"/>
    
    <category term="gateway" scheme="https://www.nablepart.com/tags/gateway/"/>
    
    <category term="big factory" scheme="https://www.nablepart.com/tags/big-factory/"/>
    
    <category term="thinking" scheme="https://www.nablepart.com/tags/thinking/"/>
    
    <category term="process" scheme="https://www.nablepart.com/tags/process/"/>
    
  </entry>
  
  <entry>
<title>Designing a gateway from 0 to 1: What is a gateway, and why develop your own?</title>
    <link href="https://www.nablepart.com/20eb551d5ad9/"/>
    <id>https://www.nablepart.com/20eb551d5ad9/</id>
    <published>2023-11-05T03:12:00.000Z</published>
    <updated>2025-08-25T09:00:39.798Z</updated>
    
<content type="html"><![CDATA[<p>There have been many epiphanies along the way: my first internship this January, joining ByteDance this October, going from complete obscurity to 30k+ followers across the network. There have been dark moments, but what matters when you are in a trough is how you climb back up. I hope the readers of this article get the chance to join the company of their dreams.</p><p><img src="https://s2.loli.net/2023/11/05/VLNKsy8tlAWuSdF.webp"> As the first article in my gateway series, this piece involves no code; it only covers what a gateway does and why I want to develop my own, with follow-up articles to come. If you are interested, please watch the demo video to obtain the full materials.</p><p><a href="https://www.bilibili.com/video/BV1eC4y1n73c/?vd_source=1d4d63e205b3ad352b4771f87295d16d#reply747752344">Link to demo video</a></p><h1 id="What-is-a-Gateway"><a href="#What-is-a-Gateway" class="headerlink" title="What is a Gateway?"></a>What is a Gateway?</h1><p>A gateway is an important device in a computer network, used to connect different networks, protocols, or communication systems so that they can communicate and exchange data. The main function of a gateway is to pass data between different networks, ensuring that it is routed and transformed correctly so that devices on different networks can understand and communicate with one another.
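As a toy illustration of this forwarding role (my own sketch; the route prefixes and backend addresses are invented, and real gateways use much richer matching), the most basic routing step matches a request path against a route table and picks a backend:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RouteTableSketch {
    // Maps a path prefix to a backend address; longest matching prefix wins.
    private final Map<String, String> routes = new LinkedHashMap<>();

    public void addRoute(String pathPrefix, String backend) {
        routes.put(pathPrefix, backend);
    }

    // Return the backend for the longest matching prefix, or null if none matches.
    public String resolve(String path) {
        String best = null;
        int bestLen = -1;
        for (Map.Entry<String, String> e : routes.entrySet()) {
            if (path.startsWith(e.getKey()) && e.getKey().length() > bestLen) {
                best = e.getValue();
                bestLen = e.getKey().length();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        RouteTableSketch table = new RouteTableSketch();
        table.addRoute("/api/user", "http://user-service:8080");   // hypothetical backends
        table.addRoute("/api/order", "http://order-service:8081");
        System.out.println(table.resolve("/api/user/profile"));    // http://user-service:8080
    }
}
```

Longest-prefix matching keeps more specific routes from being shadowed by broader ones.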
Following are some of the important concepts and functions of a gateway:</p><p>Connecting different networks: Gateways are typically used to connect different network types, such as a Local Area Network (LAN) and a Wide Area Network (WAN), Ethernet and wireless networks, or IPv4 and IPv6, so that data can be transferred and communication can take place between them.</p><p>Data conversion and protocol translation: Gateways can convert data from one network protocol to another so that devices on different networks can communicate with each other. This usually involves converting data from the format of one protocol to that of another, for example converting data carried over raw TCP&#x2F;IP into HTTP.</p><p>Security, circuit breaking and rate limiting: A gateway can also enhance network security. It can act as a firewall, monitor traffic, filter malicious traffic, and enforce access-control policies to protect the network from unauthorized access and attacks.</p><p>Data routing: A gateway can determine the best path for packets so that data travels efficiently from source to destination. This usually involves inspecting the destination address and selecting the appropriate output interface or next hop.</p><p>Protocol translation: When communicating across different networks, protocol translation may be needed to ensure that data is interpreted correctly.
Gateways can perform such tasks to ensure that devices on different networks can communicate with each other.</p><h1 id="Gateway-types"><a href="#Gateway-types" class="headerlink" title="Gateway types"></a>Gateway types</h1><p><strong>RESTful API Gateway</strong></p><ul><li>RESTful API gateways are typically used to manage and provide access to RESTful APIs.</li><li>RESTful APIs are based on the HTTP protocol and are typically used for request-response communication: fetching, creating, updating, and deleting resources.</li><li>RESTful API gateways provide routing, authentication, authorization, access control, logging, and monitoring to ensure the security and availability of an API.</li><li>RESTful API gateways typically do not support real-time two-way communication, because HTTP follows a request-response model and is not designed for long-lived two-way connections.</li></ul><p><strong>WebSocket API Gateway</strong></p><ul><li><p>A WebSocket API gateway is specifically designed to handle real-time bi-directional communication over the WebSocket protocol.</p></li><li><p>The WebSocket protocol allows a long-lived bi-directional connection between a client and a server, transferring data in real time for scenarios such as real-time chat, online gaming, and real-time notifications.
</p></li><li><p>A WebSocket API gateway provides WebSocket connection management, routing, load balancing, protocol upgrading, and messaging.</p></li><li><p>A WebSocket API gateway supports persistent connections, rather than creating a new connection for each request the way a RESTful API does.</p></li></ul><h1 id="Pros-and-Cons-of-Gateways"><a href="#Pros-and-Cons-of-Gateways" class="headerlink" title="Pros and Cons of Gateways"></a>Pros and Cons of Gateways</h1><p><strong>Advantages</strong></p><ul><li>Simplified client-side development: A gateway lets client applications communicate with back-end services without concerning themselves with complex network protocols and details. The client talks only to the gateway rather than interacting directly with back-end services, which greatly simplifies client work and reduces the technical detail developers must handle. The client no longer needs to know the specific address or API endpoint of a back-end service; it only needs to know how to reach the gateway, which routes each request to the appropriate service.</li><li>Reduced coupling: Using a gateway as an intermediate layer separates different parts of the system and loosens the coupling between them, improving maintainability and scalability. Shared functions such as authentication, authorization, and data conversion can be handled in the gateway rather than scattered across individual services, which reduces redundancy and duplication and makes those functions easier to maintain and update.</li><li>Less reinventing of the wheel: With shared functionality abstracted into the gateway, developers can focus more effort on business logic. They do not have to implement the same features, such as authentication or request validation, over and over for each service. The gateway can also provide performance optimization, load balancing, and caching, further lightening developers' load.</li></ul><p><strong>Disadvantages</strong></p><ul><li>Possible performance bottleneck: Microservice architectures gain flexibility and scalability by breaking a large application into small, relatively independent services. Under high load, however, the gateway can become a bottleneck: it handles a large volume of requests and responses, covering routing, authentication, authorization, data transformation, and so on, and if it is not properly optimized and scaled it can limit the performance of the whole system.</li><li>Request blocking: Microservice architectures encourage asynchronous communication and non-blocking IO so that services can process requests in parallel and respond quickly. Some gateways, however, rely on a synchronous communication model. If the gateway waits synchronously for responses from external services instead of using an asynchronous model, it becomes a blocking point in the request-response chain and hurts system performance and response time.</li><li>High coupling: If the gateway and the microservices are too tightly coupled, the system's scalability and independent deployability suffer. For example, if multiple microservices are highly coupled to a particular gateway, changing that gateway may require modifying several services, which violates the principles of microservice architecture. High coupling can also create collaboration problems between development teams, who must coordinate extra changes to accommodate changes to the gateway.</li></ul><h1 id="What-are-the-current-gateway-solutions"><a href="#What-are-the-current-gateway-solutions" class="headerlink" title="What are the current gateway solutions?"></a>What are the current gateway solutions?</h1><p><strong>Nginx:</strong> C&#x2F;Lua based. Pros: high performance for large-scale applications and heavy traffic; strong extensibility, with support for Lua scripts and custom plugins; versatile, usable not only as an API gateway but also as a reverse proxy and load balancer. Cons: configuration and management require technical expertise and are not especially user-friendly; advanced features require extensions such as OpenResty.</p><p><strong>Kong:</strong> Lua based. Pros: easy to extend, with a rich plugin ecosystem; supports microservice architectures and complex deployments; integrates well with other services such as databases and message queues. Cons: needs management and maintenance, especially in large-scale environments; some advanced features may require writing custom plugins.</p><p><strong>Apigee:</strong> Pros: cloud-hosted, so there is no infrastructure of your own to manage; comprehensive API management, including analytics, monitoring, and security; highly scalable for large applications. Cons: some learning curve around Apigee configuration and setup; can be expensive, especially for small projects.</p><p><strong>AWS API Gateway:</strong> Pros: integrates seamlessly with the AWS ecosystem, providing high availability and scalability; simplifies API deployment and management; can leverage other AWS services such as Lambda functions. Cons: vendor lock-in to the AWS cloud, so not applicable to hybrid cloud environments.
May require knowledge of AWS-specific configurations and settings.</p><p><strong>Istio:</strong> Pros: rich traffic management and security features for microservices; supports multi-cloud and hybrid-cloud environments, not tied to a specific provider; integrates with monitoring tools such as Prometheus and Jaeger. Cons: high complexity, takes time to learn and deploy; may require extensive configuration, especially at large scale.</p><p><strong>Spring Cloud Gateway:</strong> Pros: integrates with the Spring Cloud ecosystem and supports Spring Boot applications; lightweight and easy to deploy and manage; supports dynamic routing and filters. Cons: a relatively small feature set, suited to small and medium applications; may be less suitable than other gateways for complex API-management needs.</p><p><strong>Traefik:</strong> Pros: designed for containerized applications and microservices, with support for Docker and Kubernetes; auto-discovers back-end services and configures itself dynamically; lightweight and easy to deploy and manage. Cons: relatively few features, suited to smaller applications; may require additional plugins for advanced features.</p><p><strong>HAProxy:</strong> Pros: a high-performance load balancer for high-load environments; simple to configure and manage; supports TCP and HTTP load balancing. Cons: few advanced features, not suited to complex API-management needs; lacks gateway-specific features such as authentication and authorization.</p><h1 id="Why-should-I-develop-my-own-Gateway"><a href="#Why-should-I-develop-my-own-Gateway" class="headerlink" title="Why should I develop my own Gateway?"></a>Why should I develop my own Gateway?</h1><p>The more popular gateways at the moment are Spring Cloud Gateway and Spring Cloud Zuul, and they are good gateways.
However, these gateways are written in Java; to use them you must know Java, and if a company's business is mostly in Go, as at ByteDance where I work, a Java gateway is not a great fit. Second, because these mature frameworks aim to cover everything, adopting one when we only need part of its functionality makes the project far too heavy. It is therefore worth developing our own gateway and implementing only the important features we actually need; a self-developed gateway also gives us much greater room for customization.</p><h1 id="What-do-I-need-to-look-for-in-a-homegrown-gateway"><a href="#What-do-I-need-to-look-for-in-a-homegrown-gateway" class="headerlink" title="What do I need to look for in a homegrown gateway?"></a>What do I need to look for in a homegrown gateway?</h1><p>1. Extensibility. The gateway must be highly extensible, because we have to consider not only current business requirements but future ones as well. 2. A sound architecture. My self-developed gateway will be built using domain-driven design (DDD); if you are unfamiliar with DDD, it is worth learning about its advantages and disadvantages. 3. Interface compatibility. A gateway generally uses a configuration center or registry, the current mainstream options being Apollo and Nacos, so we need strong compatibility in order to switch between different configuration centers. 4. “Three highs” design (high concurrency, high performance, high availability). As the first gate of the system, the gateway carries far more requests than any individual back-end service, so its performance deserves very deep consideration and design, involving JVM tuning, code-level performance optimization, and so on.</p>]]></content>
    
    
<summary type="html">I built my own gateway, and it helped me land a job at a major tech company. This is my complete design of a gateway from 0 to 1; the materials include the thought process, flow charts, source code, and more.</summary>
    
    
    
    <category term="java" scheme="https://www.nablepart.com/categories/java/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="source" scheme="https://www.nablepart.com/tags/source/"/>
    
    <category term="design" scheme="https://www.nablepart.com/tags/design/"/>
    
    <category term="Self-developed" scheme="https://www.nablepart.com/tags/Self-developed/"/>
    
    <category term="gateway" scheme="https://www.nablepart.com/tags/gateway/"/>
    
    <category term="big factory" scheme="https://www.nablepart.com/tags/big-factory/"/>
    
    <category term="thinking" scheme="https://www.nablepart.com/tags/thinking/"/>
    
    <category term="process" scheme="https://www.nablepart.com/tags/process/"/>
    
  </entry>
  
  <entry>
    <title>3.5k Star! Build your own development toolkit - It-tools in a minute!</title>
    <link href="https://www.nablepart.com/5167a89bae7b/"/>
    <id>https://www.nablepart.com/5167a89bae7b/</id>
    <published>2023-11-04T16:12:00.000Z</published>
    <updated>2025-08-25T09:00:39.786Z</updated>
    
<content type="html"><![CDATA[<blockquote><p>Many people use webmaster toolkits in their daily work: JSON formatting, UUID generation, password generation, URL encoding, and so on. These tools see quite heavy use; I believe many developers reach for JSON formatting and URL encoding all the time.</p></blockquote><h2 id="Application-Overview"><a href="#Application-Overview" class="headerlink" title="Application Overview"></a>Application Overview</h2><p>IT-TOOLS is a highly regarded free and open-source tool site that provides a convenient collection of online tools for developers and IT professionals. The project is popular for its small size and light weight, ease of deployment, powerful features, and beautiful interface. The collection includes utilities such as token generators, case converters, base converters, text hashing, UUID generation, QR-code generators, and more. On GitHub, IT-TOOLS has earned 3.5k stars.</p><h2 id="key-function"><a href="#key-function" class="headerlink" title="key function"></a>key function</h2><ol><li>Crypto tools: 9 different functions, including token generation, text hashing, UUID generation, and text encryption and decryption.</li><li>Converter tools: 12 different functions, including a <code>Yaml</code> converter, a <code>Json</code> converter, a Base64 converter, and more.</li><li>Web (site) tools: 15 different functions, including URL encoding and decoding, a user-agent parser, a URL parser, and more.</li><li>Images and Videos tools: 3 functions, including an SVG placeholder generator and a QR-code generator.</li><li>Development tools: 10 different functions, including a Docker run to Docker compose converter, a timed-task (cron) generator, and SQL beautification and formatting.</li><li>There are also Network tools, Math tools, Measurement tools, Text tools, Data tools, and more!</li></ol><h2 id="Application-Features"><a href="#Application-Features" class="headerlink" title="Application 
Features"></a>Application Features</h2><h3 id="I、The-page-is-simple-and-elegant-and-supports-day-and-night-modes"><a href="#I、The-page-is-simple-and-elegant-and-supports-day-and-night-modes" class="headerlink" title="I、The page is simple and elegant, and supports day and night modes."></a>I、The page is simple and elegant, and supports day and night modes.</h3><p>IT-TOOLS has a simple yet elegant page design and supports both day and night modes, providing an excellent user experience. Users can choose the interface theme that suits their environment and personal preference, reducing eye fatigue and improving usability.</p><p><img src="https://s2.loli.net/2023/11/04/Z1s7jcLng32trWw.webp" alt="image-20230922170019438"></p><h3 id="II、Crypto-Tools"><a href="#II、Crypto-Tools" class="headerlink" title="II、Crypto Tools"></a>II、Crypto Tools</h3><p>A crypto tool is software for performing and managing cryptographic operations. Such tools are usually used to protect sensitive data and ensure that unauthorised visitors cannot easily read it in storage or in transit. IT-TOOLS includes 9 such functions, including token generation, text hashing, UUID generation, and text encryption and decryption.</p><p><img src="https://s2.loli.net/2023/11/05/Gy5gP4NdLoXETep.webp" alt="image-20230922170040492"></p><h3 id="III、Converter-Tools"><a href="#III、Converter-Tools" class="headerlink" title="III、Converter Tools"></a>III、Converter Tools</h3><p>The converter tools are a feature-rich set. IT-TOOLS provides 12 different data-format conversion functions, including a Yaml converter, a Json converter, and a Base64 converter. These functions make it easy to convert data flexibly and efficiently between formats to meet the needs of different application scenarios.
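The conversions behind such tools are standard operations; as a stand-alone sketch in plain Java (my own illustration, unrelated to IT-TOOLS' actual implementation), Base64 encoding/decoding and UUID generation look like this:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.UUID;

public class ToolSketch {
    // Encode arbitrary text to its Base64 representation.
    public static String toBase64(String text) {
        return Base64.getEncoder().encodeToString(text.getBytes(StandardCharsets.UTF_8));
    }

    // Decode a Base64 string back to plain text.
    public static String fromBase64(String encoded) {
        return new String(Base64.getDecoder().decode(encoded), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String encoded = toBase64("hello");
        System.out.println(encoded);              // aGVsbG8=
        System.out.println(fromBase64(encoded));  // hello
        System.out.println(UUID.randomUUID());    // a random version-4 UUID
    }
}
```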
</p><p><img src="https://s2.loli.net/2023/11/04/5oE1xiNKOWFfvwM.webp" alt="image-20230922170109156"></p><h3 id="IV、Web-Tools"><a href="#IV、Web-Tools" class="headerlink" title="IV、Web Tools"></a>IV、Web Tools</h3><p>The web (site) tools are a multi-functional set. IT-TOOLS offers 15 different functions, including URL encoding and decoding, a user-agent parser, a URL parser, and more. These features give developers and webmasters a wide range of tools for handling everyday tasks in web applications. Whether processing URLs, parsing user-agent strings, or resolving URLs, the web tools provide convenient and efficient solutions that help optimise the functionality and performance of a site.</p><p><img src="https://s2.loli.net/2023/11/05/FlD3Z4Vw8XtxfWo.webp" alt="image-20230922170141061"></p><h3 id="V-Development-tools"><a href="#V-Development-tools" class="headerlink" title="V. Development tools"></a>V. Development tools</h3><p>The Development tools are a multi-functional set covering 10 different functions, including a Docker run to Docker compose converter, a timed-task (cron) generator, and SQL beautification and formatting. These features give developers a broad set of tools for simplifying and accelerating application development and deployment. Whether for containerized applications, scheduled-task management, or SQL processing, the Development tools offer convenient, efficient solutions that help improve development efficiency and code quality.</p><p><img src="https://s2.loli.net/2023/11/05/3vptAsWTDLq9Cra.webp" alt="image-20230922170215587"></p><h3 id="VI-Other-tool-categories"><a href="#VI-Other-tool-categories" class="headerlink" title="VI. Other tool categories"></a>VI. Other tool categories</h3><p>IT-TOOLS also has Network, Maths, Measurement, Text, Data tools, and more.
All of this serves the diverse needs of users.</p><p><img src="https://s2.loli.net/2023/11/05/LOoP9qn6pVC8UIk.webp" alt="image-20230922170251106"></p><h2 id="Installation-Guide"><a href="#Installation-Guide" class="headerlink" title="Installation Guide"></a>Installation Guide</h2><ol><li>Go to the Cloud Native App Store.</li><li>Search for it-tools.</li><li>Open the details page and select a package type (this app supports Docker install and RAM install).</li><li>Click Install and run the appropriate command. If you have any questions, consult the documentation or join the community!</li></ol><h2 id="About-Cloud-Native-Marketplace"><a href="#About-Cloud-Native-Marketplace" class="headerlink" title="About Cloud Native Marketplace"></a>About Cloud Native Marketplace</h2><p>The Cloud Native App Market is an app marketplace that brings together all kinds of open-source software. You can use it as your own Helm Chart repository with a rich and diverse range of Helm apps, and it also offers Docker apps, Rainbond app templates, Xinchuang apps, and more.</p><p>Official site: <a href="https://link.juejin.cn/?target=https://hub.grapps.cn/">hub.grapps.cn&#x2F;</a></p>]]></content>
    
    
<summary type="html">Many people use webmaster toolkits in their work: JSON formatting, UUID generation, password generation, URL encoding, and so on. These tools see quite heavy use; I believe many developers reach for JSON formatting and URL encoding all the time.</summary>
    
    
    
    <category term="Technology" scheme="https://www.nablepart.com/categories/Technology/"/>
    
    
    <category term="Star" scheme="https://www.nablepart.com/tags/Star/"/>
    
    <category term="Build" scheme="https://www.nablepart.com/tags/Build/"/>
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="different" scheme="https://www.nablepart.com/tags/different/"/>
    
    <category term="toolkit" scheme="https://www.nablepart.com/tags/toolkit/"/>
    
    <category term="It-tools" scheme="https://www.nablepart.com/tags/It-tools/"/>
    
    <category term="minute" scheme="https://www.nablepart.com/tags/minute/"/>
    
    <category term="Webmaster" scheme="https://www.nablepart.com/tags/Webmaster/"/>
    
  </entry>
  
  <entry>
    <title>Write a good technical proposal with these 17 diagrams</title>
    <link href="https://www.nablepart.com/f0006a2bfd23/"/>
    <id>https://www.nablepart.com/f0006a2bfd23/</id>
    <published>2023-11-04T16:11:00.000Z</published>
    <updated>2025-08-25T09:00:39.798Z</updated>
    
<content type="html"><![CDATA[<p>A mountain looks different from the side than it does from near or far. Likewise, to understand a software system well, we need a variety of diagramming tools that examine the design from different perspectives. That way, during the design phase we can adequately predict system bottlenecks, implementation difficulties, and development time; build good extensibility into the business functions; and achieve high reliability, high availability, solid data security, and excellent performance, while supporting rapid iteration of business requirements.</p><p>A software system can be layered as follows:</p><ol><li>IaaS, Infrastructure as a Service: base software including operating systems (networking, storage, compute), virtual machines, Docker, and so on.</li><li>PaaS, Platform as a Service: the middleware we use every day, such as messaging, caching, and databases. It also includes frameworks and libraries: for example IoC, ORM, and RPC frameworks, and third-party libraries for images and special file handling.</li><li>SaaS, Software as a Service: our everyday applications, whether 2B or 2C, fall into this category.</li></ol><p>Below we focus on the diagramming tools that can be used to analyse application software (the SaaS layer). These diagrams should be readable by developers, product managers, business architects, system architects, technical managers, and so on.</p><h1 id="use-case-diagram"><a href="#use-case-diagram" class="headerlink" title="use case diagram"></a>use case diagram</h1><p>Use case diagrams are the clearest and easiest diagrams to understand, drawn from the user's point of view.</p><h2 id="element-e-g-in-array"><a href="#element-e-g-in-array" class="headerlink" title="element (e.g. in array)"></a>Elements</h2><ol><li>How to use our system.
If the system is already complete, a clean version of the user manual can serve instead.</li><li>What kinds of usage processes there are, and what the application scenarios are.</li><li>What needs to be done in each process.</li></ol><p>A use case diagram first requires analysing who uses the system, which the following questions can help with:</p><ul><li>Who will use the system's main functions?</li><li>Who will need the system's support to do their job?</li><li>Who will need to maintain and manage the system and keep it running?</li></ul><p>For example, a simple user profile change: <img src="https://s2.loli.net/2023/11/05/WdFD1YG6LSJ8ATr.webp"></p><p>Because they are so simple and direct, use case diagrams are easily overlooked by system designers. In fact, they are the friendliest, most direct way for a newcomer to get a quick overview of what functionality our system provides and to whom, and of how those functions relate to one another.</p><h2 id="use-case-statute"><a href="#use-case-statute" class="headerlink" title="use case statute"></a>Use case specification</h2><p>A use case specification is a detailed description of a use case, generally including a brief description, the main event flow, alternative event flows, preconditions, postconditions, and priority.</p><p>The use case specification covers both the success scenarios described in the main event flow and the abnormal scenarios described in the alternative flows, which encourages systematic thinking, uncovers abnormal scenarios, rounds out system functionality, and improves ease of use.</p><p>Postconditions should cover all possible end states of the use case.
That is, postconditions should describe not only the state after the use case ends successfully, but also the state after it ends in error.</p><h2 id="typical-example"><a href="#typical-example" class="headerlink" title="typical example"></a>typical example</h2><p><img src="https://s2.loli.net/2023/11/05/8tF7BwZ94GlahqS.webp"> In general, only the important requirements need this: write use case specifications for the critical use cases, and ordinary requirements can be skipped. (They should never be skipped in the PRD.)</p><p>Use case diagrams together with system pages and a reasonably detailed user manual allow a quicker and more comprehensive understanding of the system's functionality. Showing our system's pages, for example, is more visual and clearer.</p><p>Only by understanding what functionality the system provides, and for which roles, can you understand why the system is designed the way it is. Some people find use case diagrams superfluous because they themselves know the system well enough; they forget that others are still completely new to it. Use case diagrams are the most direct way into a system.</p><h1 id="Data-model-diagram"><a href="#Data-model-diagram" class="headerlink" title="Data model diagram"></a>Data model diagram</h1><p>Programs &#x3D; data structures + algorithms: a software program processes input data according to some algorithm and outputs specific data. Data is the core of a program, and it is also the part most prone to change; the most common change is needing to add or remove fields.</p><p>So the data needs to be modelled and the relationships between models sorted out, so that the most closely related data sits in one model that can be extended independently. A data model diagram describes the relationships between models and the fields each model contains.
There are three elements: models, attributes, and the relationships between models.</p><p>Data model diagrams include E-R diagrams, database entity diagrams, and so on.</p><ul><li>E-R diagrams are simpler, use plain business language, and do not involve table structure.</li><li>Database model layer: describes the relationships and table design at the database level. It is more complex than an E-R diagram, but more comprehensive and understandable to everyone.</li><li>Most of the time our design documents are written for product, R&amp;D, and technical managers, for whom a database entity diagram is perfectly readable. (If someone can't read it, let them learn.)</li></ul><p>Personally, I think the E-R diagram can be omitted from the design document in favour of going straight to the database entity diagram. This does, however, require the entity diagram to carry enough textual description, such as attribute notes and relationship descriptions.</p><h2 id="E-R-graphical-symbol"><a href="#E-R-graphical-symbol" class="headerlink" title="E-R graphical symbol"></a>E-R graphical symbol</h2><p><img src="https://s2.loli.net/2023/11/05/L6iMf4uOUhvIPcy.webp"></p><h1 id="Database-entity-diagram-database-ER-diagram"><a href="#Database-entity-diagram-database-ER-diagram" class="headerlink" title="Database entity diagram (database ER diagram)"></a>Database entity diagram (database ER diagram)</h1><p><img src="https://s2.loli.net/2023/11/05/YOg3s2bK8ewxBMD.webp"></p><h2 id="The-process-of-sorting-out-ER-diagrams-for-databases"><a href="#The-process-of-sorting-out-ER-diagrams-for-databases" class="headerlink" title="The process of sorting out ER diagrams for databases"></a>The process of sorting out ER diagrams for databases</h2><p>Designing database ER diagrams, entity diagrams, or domain model diagrams is a test of design experience.
It requires domain experts to communicate requirements based on use case diagrams, use case flowcharts, and iterative requirements, and to keep pushing on the following questions:</p><ol><li>Where the business is expanding, where it is changing, and where it plans to evolve in the future.</li><li>Where the system is expanding, and how to achieve scalability.</li><li>Which domain entities should be included.</li><li>Where the boundaries of the domain model should be. Are the associations 1-to-1, 1-to-N, etc.?</li></ol><p>The process of analysing database ER diagrams can be supported by design methods such as DDD.</p><h1 id="flow-chart"><a href="#flow-chart" class="headerlink" title="flow chart"></a>flow chart</h1><p>In the system design phase, only the core and critical business processes need to be covered; simple processes in the minutiae should be skipped. (Cover non-core processes only if there is energy and time left.)</p><h2 id="Management-Processes-and-User-Processes"><a href="#Management-Processes-and-User-Processes" class="headerlink" title="Management Processes and User Processes"></a>Management Processes and User Processes</h2><p>Take the marketing system as an example: it is divided into a management process and a user process.</p><ul><li>Operations creating a marketing campaign is a management process. This process does not have high performance requirements, and its traffic entry point is different.</li><li>User behaviours such as placing orders trigger certain marketing activities; this process has very high performance and security requirements.</li></ul><p>Different processes have different focuses. For example, the dubbo rpc system can be divided into the initialisation process and the method invocation process.</p><ul><li>The rpc provider needs to register the interface at initialisation time. 
The rpc consumer needs to listen to the interface at initialisation time.</li><li>The rpc call flow: from the consumer side to the provider is the call flow.</li></ul><h2 id="Flowcharting"><a href="#Flowcharting" class="headerlink" title="Flowcharting"></a>Flowcharting</h2><p><img src="https://s2.loli.net/2023/11/05/3Pytuj7nq5T6fMA.webp"> Flowcharts are flexible in the way they are drawn. The following are personal experiences and habits:</p><ul><li>Component thinking: describes the control-flow calls between components.</li><li>Data thinking: describes how data changes as it flows between components.</li></ul><p>Generally, boxes are used to represent components, and lines are used to represent calls to methods, actions, or data.</p><h2 id="Component-thinking"><a href="#Component-thinking" class="headerlink" title="Component thinking"></a>Component thinking</h2><p>Designing a flowchart in terms of components requires first abstracting the components of the system and deciding which steps each component handles. It is a simplified version of the invocation timing diagram, with inputs and outputs de-emphasised. (A timing diagram describes the method invocation hierarchy, which is more detailed and clearer.)</p><p>The following is an example of a control flow diagram of a component invocation that exposes a service on the dubbo Provider side.</p><p><img src="https://s2.loli.net/2023/11/05/57xthRkIDPHUezX.webp"></p><h2 id="data-mentality"><a href="#data-mentality" class="headerlink" title="Data thinking"></a>Data thinking</h2><p>A data flow diagram describes the transfer of data between components; the data flow describes what the inputs and outputs of a component or system are.<img src="https://s2.loli.net/2023/11/05/YWrXO4DCod3qSf8.webp"> </p><p>In fact, in most business systems, data flow diagrams are hard to delineate cleanly, because only two or three models are typically processed within a business process. 
The boundaries between the inputs and outputs of the components are not clear, and for this reason the data flow diagram and the control flow diagram can be combined into one. The boxes still represent the components, but the lines can include both actions and data.<img src="https://s2.loli.net/2023/11/05/S4kLOybHuoJWqgr.webp"></p><h2 id="Non-functional-design-in-flowcharting"><a href="#Non-functional-design-in-flowcharting" class="headerlink" title="Non-functional design in flowcharting"></a>Non-functional design in flowcharting</h2><p>Flowcharts of the core read and write processes can also capture non-functional design for high reliability, high availability, and performance bottlenecks; that is, how the reliability of the data is ensured, how the availability of the system is ensured, which node in the process is the performance bottleneck, and how it can be optimised.</p><p>The following points are for reference only:</p><ol><li>Where the data is stored; synchronous vs. asynchronous writes; consistency guarantees for asynchronous writes.</li><li>How concurrent operations guarantee consistency (for example, inventory).</li><li>How to improve system availability under high concurrency; how to optimise the read path and the write path.</li><li>Whether to design for standby or active-active deployment.</li><li>Idempotency, retry strategy, and load-balancing strategy.</li></ol><h1 id="时序图"><a href="#时序图" class="headerlink" title="Timing diagram"></a>Timing diagram</h1><p>A timing diagram is a more detailed system flowchart. A typical system flowchart covers the key system components and key data-processing nodes, but is not specific to any class or method. 
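</p><p>To make the contrast concrete, here is a minimal, hypothetical call chain (all class and method names are invented). A flowchart would stop at "controller calls service, service calls repository"; a timing diagram also records the nesting and return order of the actual method calls.</p>

```java
// Hypothetical three-layer call chain. A timing diagram draws each of these
// nested invocations as an activation bar, with the deepest call returning first.
public class CallStackSketch {
    static String repository() { return "row"; }                       // deepest frame
    static String service()    { return "svc(" + repository() + ")"; } // calls repository
    static String controller() { return "ctl(" + service() + ")"; }    // calls service

    public static void main(String[] args) {
        // The flattened result mirrors the call stack: controller -> service -> repository
        System.out.println(controller()); // prints ctl(svc(row))
    }
}
```

<p>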
Timing diagrams pay attention to the control flow of program execution; the presentation of timing is much like a method call stack during system execution.</p><h2 id="Timing-diagram-elements"><a href="#Timing-diagram-elements" class="headerlink" title="Timing diagram elements"></a>Timing diagram elements</h2><ol><li>The method call stack (the core).</li><li>Descriptions of branching and looping.</li><li>Descriptions of methods and their parameters.</li><li>Annotations for the main key nodes.</li></ol><h2 id="Timing-diagram-example"><a href="#Timing-diagram-example" class="headerlink" title="Timing diagram example"></a>Timing diagram example</h2><p><img src="https://s2.loli.net/2023/11/05/w1tkhcvZnMPUxQf.webp"></p><p>You can see that the timing diagram is precise down to which class calls which method of another class, showing the depth and level of nesting of method calls. Together with the comments on the important key nodes, the reader can see the overall method invocation system even without reading the code. From the timing diagram we can learn:</p><ol><li>where a class is located in the timing diagram and what it is responsible for;</li><li>what the upstream and downstream dependencies of a class are in the current flow.</li></ol><h1 id="Class-Diagram"><a href="#Class-Diagram" class="headerlink" title="Class Diagram"></a>Class Diagram</h1><p>A class diagram describes the dependencies between classes (composition, inheritance, interface implementation).</p><h2 id="Class-diagram-elements"><a href="#Class-diagram-elements" class="headerlink" title="Class diagram elements"></a>Class diagram elements</h2><ol><li>Inheritance, implementation, and composition dependencies.</li><li>The key core methods of the class. 
(Remember to add comments describing what capabilities the class extends.)</li></ol><p>Here is a class diagram of the Spring IoC container.</p><p><img src="https://s2.loli.net/2023/11/05/MYiQfKpJUqdLWc1.webp"></p><p>The arrows in a class diagram describe whether there is a dependency, inheritance&#x2F;composition&#x2F;interface-implementation relationship between two classes.</p><h2 id="When-to-use-class-diagrams"><a href="#When-to-use-class-diagrams" class="headerlink" title="When to use class diagrams?"></a>When to use class diagrams?</h2><h3 id="Simple-business-processes-don’t-need-class-diagrams"><a href="#Simple-business-processes-don’t-need-class-diagrams" class="headerlink" title="Simple business processes don’t need class diagrams."></a>Simple business processes don’t need class diagrams.</h3><p>For example, if there is only one implementation of an interface, and there is no complex inheritance, you don’t need to write a class diagram.</p><h3 id="Complex-inheritance-systems-require-class-diagrams"><a href="#Complex-inheritance-systems-require-class-diagrams" class="headerlink" title="Complex inheritance systems require class diagrams."></a>Complex inheritance systems require class diagrams.</h3><p>To achieve maximum reusability and extensibility, a large number of inheritance and interface implementation classes are used. In that case, without a class diagram, it is impossible to fully understand the inheritance system of an interface or a class. 
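</p><p>As a minimal, invented illustration (none of these types appear in the article), this is the kind of inheritance system a class diagram disambiguates: an interface at the top, an abstract base class holding shared logic, and two sibling implementations.</p>

```java
// Interface at the top of the hierarchy.
interface Payment {
    long feeCents(long amountCents);
}

// Abstract base class: shared logic reused by every implementation.
abstract class BasePayment implements Payment {
    protected long percentOf(long amountCents, long pct) {
        return amountCents * pct / 100;
    }
}

// Two sibling implementations; a class diagram shows both extend BasePayment.
class CardPayment extends BasePayment {
    public long feeCents(long amountCents) { return percentOf(amountCents, 3); }
}

class WalletPayment extends BasePayment {
    public long feeCents(long amountCents) { return percentOf(amountCents, 1); }
}

public class ClassDiagramSketch {
    public static void main(String[] args) {
        Payment p = new CardPayment();
        System.out.println(p.feeCents(1000)); // prints 30
    }
}
```

<p>Even in this tiny hierarchy, the diagram shows at a glance where each class sits and which siblings it has.</p><p>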
It’s not clear where a class fits into the inheritance system.</p><h2 id="Why-can’t-class-diagrams-be-classified-as-flowcharts"><a href="#Why-can’t-class-diagrams-be-classified-as-flowcharts" class="headerlink" title="Why can’t class diagrams be classified as flowcharts?"></a>Why can’t class diagrams be classified as flowcharts?</h2><p>In general, only flowcharts and timing diagrams are specific down to the class level. When the reader sees several classes in a flowchart or timing diagram, he or she wonders how those classes are related.</p><p>At that point, you can decide whether to organise a class diagram to show the dependencies.</p><h2 id="Class-diagrams-and-design-patterns"><a href="#Class-diagrams-and-design-patterns" class="headerlink" title="Class diagrams and design patterns"></a>Class diagrams and design patterns</h2><p>Class diagrams are also used when introducing design patterns. For example, the class diagram for the Factory pattern:<img src="https://s2.loli.net/2023/11/05/ZPh7eBrF8LYxq1s.webp"></p><h1 id="System-Architecture-Diagram"><a href="#System-Architecture-Diagram" class="headerlink" title="System Architecture Diagram"></a>System Architecture Diagram</h1><p>System architecture diagrams are used to describe the components, modules, etc. within an application. 
In general, there are two types of diagrams: the full system architecture diagram and the single application architecture diagram.</p><h2 id="System-Architecture-Diagram-1"><a href="#System-Architecture-Diagram-1" class="headerlink" title="System Architecture Diagram"></a>System Architecture Diagram</h2><p>A business architecture diagram is a neat presentation of the hierarchy and relationships between the various systems of an organization from the perspective of business logic.<img src="https://s2.loli.net/2023/11/05/uXvIAkdPpbQch1m.webp"></p><h2 id="Single-Application-Architecture-Diagram"><a href="#Single-Application-Architecture-Diagram" class="headerlink" title="Single Application Architecture Diagram"></a>Single Application Architecture Diagram</h2><p>Single-application business architecture diagrams can be organized into the classic three-tier structure: presentation layer, business logic layer, and data layer.<img src="https://s2.loli.net/2023/11/05/agOcbteNo67mxB2.webp"></p><h1 id="Application-Architecture-Diagram"><a href="#Application-Architecture-Diagram" class="headerlink" title="Application Architecture Diagram"></a>Application Architecture Diagram</h1><p>The application architecture diagram focuses on the location of an application within the system. (It is similar to the system-wide business architecture diagram above.)</p><p>The following is an application architecture diagram for a coupon system. It describes the position of an application service within the entire set of coupon-related microservices.</p><p>For example, the CouponJob service is responsible for issuing coupons, and there are various forms of coupon activities associated with this service in the upper layer. 
Redemption codes, coupon pages, and so on all rely on the CouponJob coupon service.</p><p><img src="https://s2.loli.net/2023/11/05/gzm5xUhOsBcNAMk.webp"></p><p>Ideally, inter-application dependencies should be unidirectional. If there are obvious cyclic dependencies between two upstream and downstream services, you need to consider whether the two systems are heavily coupled and whether they implement similar functionality. Do they need to be merged into a single service?</p><p>Is it more appropriate to put strongly correlated business modules into one service?</p><p>The application architecture describes the dependencies between applications and where they are located in the system: is a given application a top-tier or a bottom-tier application? When designing an application architecture diagram, it is not recommended to include the module divisions within each application, as this will result in an overly large architecture diagram that is not conducive to understanding.</p><p><strong>An architecture diagram only needs to describe a single view</strong>. 
(with a single responsibility as far as possible)</p><p>The following is the application architecture diagram for HBase.<img src="https://s2.loli.net/2023/11/05/WeKA7odP8CkMpOz.webp"></p><p>From the diagram we can see:</p><ul><li>Zookeeper is responsible for cluster management tasks such as liveness monitoring.</li><li>The master is not responsible for data reads and writes; it only handles statements such as DDL table creation, and is responsible for the RegionServer failover process.</li><li>RegionServer is responsible for receiving client reads and writes.</li><li>HDFS, as distributed storage, receives the RegionServer’s reads and writes. (Which modules RegionServer contains internally and how it reads and writes naturally belong in other architecture diagrams. Each architecture diagram describes the system from only one viewpoint.)</li></ul><h1 id="Deployment-Architecture-Diagram"><a href="#Deployment-Architecture-Diagram" class="headerlink" title="Deployment Architecture Diagram"></a>Deployment Architecture Diagram</h1><p>Deployment architecture diagrams focus on describing how an application is deployed online.</p><h2 id="Deployment-architecture-diagrams-focus-on-the-core-concerns"><a href="#Deployment-architecture-diagrams-focus-on-the-core-concerns" class="headerlink" title="Deployment architecture diagrams focus on the core concerns"></a>Deployment architecture diagrams focus on the core concerns</h2><ul><li><p>Whether traffic is coming from the admin side or the user side.</p><ul><li>Is it authenticated?</li></ul></li><li><p>Which nginx instances the traffic comes through, public or intranet, and under which domain name (public and intranet use different domain names).</p></li><li><p>Whether the application is load balanced, and what the policy is.</p></li><li><p>Whether the application is deployed in multiple server rooms, and in which server rooms it is deployed.</p><ul><li>How the traffic is routed between the server rooms; whether there are cross-server-room calls.</li><li>Whether the application is partitioned by user dimension.</li><li>Whether server-room routing is done by region dimension.</li><li>What the routing rules for rpc calls are, and where they are managed.</li></ul></li><li><p>The deployment environment: is it a virtual machine, physical machine, or Docker? A private cloud, public cloud, or hybrid cloud architecture?</p></li><li><p>Are the dependent databases and applications deployed in the same machine room?</p></li><li><p>For other middleware such as MQ, Redis, ES, etc.: in which machine room is it deployed? Is it across server rooms, etc.?</p></li></ul><p>The following deployment architecture describes an application deployed on containers, with user requests load balanced by SLB; static resources are accessed separately, and database services are deployed on RDS.</p><p><img src="https://s2.loli.net/2023/11/05/SCqvBjE5oauVKyt.webp"></p><h1 id="All-link-calls-Structural-diagram"><a href="#All-link-calls-Structural-diagram" class="headerlink" title="All-link calls Structural diagram"></a>All-link calls Structural diagram</h1><p>The following scenarios require sorting out the full-link upstream and downstream dependency graph:</p><ol><li>Interface parameters or the interface implementation need to be changed, and we must confirm this does not affect the upstream (generally the implementation details are shielded from the upstream).</li><li>The interface is being downgraded or migrated to a new interface.</li><li>The upstream must be notified of rate-limiting, circuit-breaking, and degradation policies.</li></ol><p>When any of the above occurs, we need to know which upstreams depend on us and in which business scenarios, and to rank the upstream and downstream dependencies on our rpc interfaces, http interfaces, mq consumers, shared databases, and so on, by importance and priority. 
This can be recorded in the form of a table, for example:<img src="https://s2.loli.net/2023/11/04/3zbSgrtCMoUO5lp.webp"> For method calls and module dependencies within a single project, we can quickly sort out the upstream dependencies, for example with IDE shortcuts. In the case of microservices, however, we need to obtain the call-chain diagram, and can only rely on the service governance framework and code-scanning tools to sort out the upstream dependencies of each interface.</p><p>This may reveal the need to add authentication to an interface so that arbitrary callers cannot invoke our service. At the very least we become aware of who is invoking the interface, which guards against high traffic and unreasonable business scenarios, and makes future upgrades easier.</p><h2 id="Each-architecture-diagram-is-a-unique-perspective"><a href="#Each-architecture-diagram-is-a-unique-perspective" class="headerlink" title="Each architecture diagram is a unique perspective"></a>Each architecture diagram is a unique perspective</h2><h2 id="Architecture-diagram-perspectives"><a href="#Architecture-diagram-perspectives" class="headerlink" title="Architecture diagram perspectives"></a>Architecture diagram perspectives</h2><ul><li><p>Use Case Diagram: users, product managers, and testers can visualize and clearly understand the usage scenarios of our system and what it can do.</p></li><li><p>Database Entity Diagram: helps business architects, business experts, and product managers understand the modeling process and whether there are unclear domain boundaries or domain-coupling problems.</p></li><li><p>Flowchart: helps development engineers, business architects, and business experts understand the core process more clearly: how the data flows between components and how the components call each other.</p></li><li><p>System Architecture Diagram: development engineers can use this to quickly understand how many modules the system has and how the layering is carried out. 
Each layer has its own responsibilities.</p></li><li><p>Application architecture diagram: business architects can see whether there is coupling between microservices, whether any dependency is unreasonable, and whether the boundaries are clear. Development engineers can see where the system they are responsible for fits into the overall architecture.</p></li><li><p>Application Deployment Architecture: business architects, development engineers, and operations engineers can have a clearer understanding of the service deployment environment, traffic ingress, load-balancing strategy, routing strategy, middleware deployment, and so on.</p></li></ul><p>Above we have analysed which specific diagrams should be included in the design document. In addition, the design document should be written according to the following basic principles.</p><h1 id="Design-Documentation-Basics"><a href="#Design-Documentation-Basics" class="headerlink" title="Design Documentation Basics"></a>Design Documentation Basics</h1><ol><li>Documentation is for people. It must be thorough enough for the target audience, and each reader’s perspective must be taken into account.</li><li>It must be implementable. The document needs sufficient design detail to accurately anticipate technical difficulties and to propose clear solutions for them (covering the key requirements and processes).</li><li>An architecture diagram should only describe one view. Pursuing a single large, all-encompassing architecture diagram only makes it difficult to read and impossible to modify. (Aim for a single responsibility.)</li></ol>]]></content>
    
    
    <summary type="html">The view from the side of a mountain is different from the view from near and far. In order to better understand the software system, we need to use a variety of diagramming tools to understand the system design from different perspectives.</summary>
    
    
    
    <category term="Technology" scheme="https://www.nablepart.com/categories/Technology/"/>
    
    
    <category term="different" scheme="https://www.nablepart.com/tags/different/"/>
    
    <category term="understand" scheme="https://www.nablepart.com/tags/understand/"/>
    
    <category term="technical" scheme="https://www.nablepart.com/tags/technical/"/>
    
    <category term="generates" scheme="https://www.nablepart.com/tags/generates/"/>
    
    <category term="proposal" scheme="https://www.nablepart.com/tags/proposal/"/>
    
    <category term="mountain" scheme="https://www.nablepart.com/tags/mountain/"/>
    
    <category term="diagramming" scheme="https://www.nablepart.com/tags/diagramming/"/>
    
    <category term="design" scheme="https://www.nablepart.com/tags/design/"/>
    
  </entry>
  
  <entry>
    <title>Tomcat startup process (source code analysis)</title>
    <link href="https://www.nablepart.com/8704374db021/"/>
    <id>https://www.nablepart.com/8704374db021/</id>
    <published>2023-11-04T16:10:00.000Z</published>
    <updated>2025-08-25T09:00:39.798Z</updated>
    
    <content type="html"><![CDATA[<p>The main method of Tomcat startup is Bootstrap with the following source code:</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line">Bootstrap daemon;</span><br><span class="line"><span class="type">Bootstrap</span> <span class="variable">bootstrap</span> <span class="operator">=</span> <span class="keyword">new</span> <span class="title class_">Bootstrap</span>();</span><br><span class="line">bootstrap.init();</span><br><span class="line">daemon = bootstrap;</span><br><span class="line">...</span><br><span class="line"></span><br><span class="line">daemon.setAwait(<span class="literal">true</span>);</span><br><span class="line">daemon.load(args);</span><br><span class="line">daemon.start();</span><br></pre></td></tr></table></figure><h2 id="1-Bootstrap-init-method"><a href="#1-Bootstrap-init-method" class="headerlink" title="1. Bootstrap ## init method"></a>1. 
Bootstrap ## init method</h2><p>The init method does two main things:</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line">initClassLoaders();</span><br><span class="line">Thread.currentThread().setContextClassLoader(catalinaLoader);</span><br><span class="line">SecurityClassLoad.securityClassLoad(catalinaLoader);</span><br><span class="line">​</span><br><span class="line"><span class="type">Class</span> <span class="variable">startupClass</span> <span class="operator">=</span> catalinaLoader.loadClass(<span class="string">&quot;org.apache.catalina.startup.Catalina&quot;</span>);</span><br><span class="line"><span class="type">Object</span> <span class="variable">startupInstance</span> <span class="operator">=</span> startupClass.getConstructor().newInstance();</span><br><span class="line">​</span><br><span class="line"><span class="type">String</span> <span class="variable">methodName</span> <span class="operator">=</span> <span class="string">&quot;setParentClassLoader&quot;</span>;</span><br><span class="line">Class paramTypes[] = <span class="keyword">new</span> <span class="title class_">Class</span>[<span class="number">1</span>];</span><br><span class="line">paramTypes[<span class="number">0</span>] = Class.forName(<span class="string">&quot;java.lang.ClassLoader&quot;</span>);</span><br><span 
class="line">Object paramValues[] = <span class="keyword">new</span> <span class="title class_">Object</span>[<span class="number">1</span>];</span><br><span class="line">paramValues[<span class="number">0</span>] = sharedLoader;</span><br><span class="line"><span class="type">Method</span> <span class="variable">method</span> <span class="operator">=</span></span><br><span class="line">            startupInstance.getClass().getMethod(methodName, paramTypes);</span><br><span class="line">method.invoke(startupInstance, paramValues);</span><br><span class="line">​</span><br><span class="line">catalinaDaemon = startupInstance;</span><br></pre></td></tr></table></figure><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line">commonLoader = createClassLoader(<span class="string">&quot;common&quot;</span>, <span class="literal">null</span>);</span><br><span class="line"><span class="keyword">if</span> (commonLoader == <span class="literal">null</span>) &#123;</span><br><span class="line">    commonLoader = <span class="built_in">this</span>.getClass().getClassLoader();</span><br><span class="line">&#125;</span><br><span class="line">catalinaLoader = createClassLoader(<span class="string">&quot;server&quot;</span>, commonLoader);</span><br><span class="line">sharedLoader = createClassLoader(<span class="string">&quot;shared&quot;</span>, commonLoader);</span><br></pre></td></tr></table></figure><h2 id="2-Bootstrap-load"><a href="#2-Bootstrap-load" class="headerlink" title="2. Bootstrap #load"></a>2. 
Bootstrap #load</h2><p>Call Catalina’s load method via reflection (why via reflection?)</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><span class="type">String</span> <span class="variable">methodName</span> <span class="operator">=</span> <span class="string">&quot;load&quot;</span>;</span><br><span class="line"><span class="type">Method</span> <span class="variable">method</span> <span class="operator">=</span>catalinaDaemon.getClass().getMethod(methodName, paramTypes);</span><br><span class="line">method.invoke(catalinaDaemon, param);</span><br></pre></td></tr></table></figure><p>Catalina’s load method is divided into three main steps:</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line">initDirs();</span><br><span class="line">initNaming();</span><br><span class="line">​</span><br><span class="line"><span class="type">Digester</span> <span class="variable">digester</span> <span class="operator">=</span> createStartDigester();</span><br><span class="line"><span class="type">InputSource</span> <span class="variable">inputSource</span> <span class="operator">=</span> <span class="literal">null</span>;</span><br><span class="line"><span class="type">InputStream</span> <span class="variable">inputStream</span> <span class="operator">=</span> <span class="literal">null</span>;</span><br><span 
class="line">inputSource.setByteStream(inputStream);</span><br><span class="line">digester.push(<span class="built_in">this</span>);</span><br><span class="line">digester.parse(inputSource);</span><br><span class="line">​</span><br><span class="line"></span><br><span class="line"> getServer().init();</span><br></pre></td></tr></table></figure><p>StandardServer’s init method (which actually calls initInternal, because it inherits from LifecycleBase; the same holds for the init methods that follow) mainly calls the init method of each Service, whose implementation is StandardService. (It will, of course, also scan the class loader for resources, which is ignored here.)</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line"><span class="keyword">for</span> (Service service : services) &#123;</span><br><span class="line">    service.init();</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>The StandardService’s init method does a few things:</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line">engine.init();</span><br><span class="line">​</span><br><span class="line">mapperListener.init();</span><br><span class="line">​</span><br><span class="line"><span class="keyword">for</span> (Connector connector : connectors) &#123;</span><br><span class="line">    connector.init();</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>StandardEngine 
calls its parent class ContainerBase’s init within its own init, mainly to initialize a startup thread pool.</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line"><span class="type">BlockingQueue</span> <span class="variable">startStopQueue</span> <span class="operator">=</span> <span class="keyword">new</span> <span class="title class_">LinkedBlockingQueue</span>&lt;&gt;();</span><br><span class="line">startStopExecutor = <span class="keyword">new</span> <span class="title class_">ThreadPoolExecutor</span>(</span><br><span class="line">                getStartStopThreadsInternal(),</span><br><span class="line">                getStartStopThreadsInternal(), <span class="number">10</span>, TimeUnit.SECONDS,</span><br><span class="line">                startStopQueue,</span><br><span class="line">                <span class="keyword">new</span> <span class="title class_">StartStopThreadFactory</span>(getName() + <span class="string">&quot;-startStop-&quot;</span>));</span><br><span class="line">startStopExecutor.allowCoreThreadTimeOut(<span class="literal">true</span>);</span><br></pre></td></tr></table></figure><p>Connector’s init method mainly does the following:</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line">adapter = <span class="keyword">new</span> <span class="title class_">CoyoteAdapter</span>(<span class="built_in">this</span>);</span><br><span class="line">protocolHandler.setAdapter(adapter);</span><br><span 
class="line">protocolHandler.init();</span><br></pre></td></tr></table></figure><p>Endpoint is an interface whose abstract base class is AbstractEndpoint; its init method calls bindWithCleanup, which in turn calls the abstract bind method, whose behavior differs per implementation. Take NioEndpoint as an example: it mainly creates a ServerSocketChannel and then binds the port.</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line"><span class="keyword">public</span> <span class="keyword">void</span> <span class="title function_">bind</span><span class="params">()</span> <span class="keyword">throws</span> Exception &#123;</span><br><span class="line">    initServerSocket();</span><br><span class="line">    setStopLatch(<span class="keyword">new</span> <span class="title class_">CountDownLatch</span>(<span class="number">1</span>));</span><br><span class="line"></span><br><span class="line">    initialiseSsl();</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line"><span class="keyword">protected</span> <span class="keyword">void</span> <span class="title function_">initServerSocket</span><span class="params">()</span> <span 
class="keyword">throws</span> Exception &#123;</span><br><span class="line">    serverSock = ServerSocketChannel.open();</span><br><span class="line">    socketProperties.setProperties(serverSock.socket());</span><br><span class="line">    <span class="type">InetSocketAddress</span> <span class="variable">addr</span> <span class="operator">=</span> <span class="keyword">new</span> <span class="title class_">InetSocketAddress</span>(getAddress(), getPortWithOffset());</span><br><span class="line">    serverSock.socket().bind(addr,getAcceptCount());</span><br><span class="line"></span><br><span class="line">    serverSock.configureBlocking(<span class="literal">true</span>);</span><br><span class="line"></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><h2 id="3-Bootstrap-start"><a href="#3-Bootstrap-start" class="headerlink" title="3. Bootstrap#start"></a>3. Bootstrap#start</h2><p>Bootstrap’s start method calls Catalina’s start method via reflection:</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line"><span class="type">Method</span> <span class="variable">method</span> <span class="operator">=</span> catalinaDaemon.getClass().getMethod(<span class="string">&quot;start&quot;</span>, (Class [])<span class="literal">null</span>);</span><br><span class="line">method.invoke(catalinaDaemon, (Object [])<span class="literal">null</span>);</span><br></pre></td></tr></table></figure><p>There are three main steps in Catalina’s start method:</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span 
class="line"></span><br><span class="line">getServer().start();</span><br><span class="line"></span><br><span class="line"><span class="keyword">if</span> (shutdownHook == <span class="literal">null</span>) &#123;</span><br><span class="line">    shutdownHook = <span class="keyword">new</span> <span class="title class_">CatalinaShutdownHook</span>();</span><br><span class="line">&#125;</span><br><span class="line">Runtime.getRuntime().addShutdownHook(shutdownHook);</span><br></pre></td></tr></table></figure><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">if</span> (await) &#123;</span><br><span class="line">    await();</span><br><span class="line">    stop();</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>The Server’s start method mainly calls each Service’s start method, which is internally implemented by startInternal because it inherits LifecycleBase. 
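</p><p>The CatalinaShutdownHook registration shown above uses the JDK’s standard shutdown-hook mechanism. A standalone, JDK-only sketch (the class and thread names here are illustrative, not Tomcat’s):</p>

```java
// Standalone sketch of the JDK shutdown-hook mechanism Catalina relies on.
// ShutdownHookDemo and the thread name are illustrative, not Tomcat classes.
public class ShutdownHookDemo {
    public static void main(String[] args) {
        Thread hook = new Thread(() -> System.out.println("stopping..."), "demo-shutdownHook");
        // Catalina guards with a null check before creating its hook; registering
        // the same hook object twice would throw IllegalArgumentException.
        Runtime.getRuntime().addShutdownHook(hook);
        System.out.println("hook registered: " + hook.getName());
        // On JVM exit the hook thread runs and prints "stopping...".
    }
}
```

<p>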
Some of the following classes are analyzed in the same way.</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line"><span class="keyword">for</span> (Service service : services) &#123;</span><br><span class="line">    service.start();</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Service’s start method does three main things:</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line">engine.start();</span><br><span class="line"></span><br><span class="line">mapperListener.start();</span><br><span class="line"></span><br><span class="line"><span class="keyword">for</span> (Connector connector: connectors) &#123;</span><br><span class="line">    connector.start();</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Engine’s start method, which mainly calls the startInternal method of the parent class ContainerBase, does the following:</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span 
class="line">15</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line">Container children[] = findChildren();</span><br><span class="line">List&lt;Future&lt;Void&gt;&gt; results = <span class="keyword">new</span> <span class="title class_">ArrayList</span>&lt;&gt;();</span><br><span class="line"><span class="keyword">for</span> (Container child : children) &#123;</span><br><span class="line">    results.add(startStopExecutor.submit(<span class="keyword">new</span> <span class="title class_">StartChild</span>(child)));</span><br><span class="line">&#125;</span><br><span class="line"><span class="keyword">for</span> (Future&lt;Void&gt; result : results) &#123;</span><br><span class="line">    result.get();</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line"><span class="keyword">if</span> (pipeline <span class="keyword">instanceof</span> Lifecycle) &#123;</span><br><span class="line">    ((Lifecycle) pipeline).start();</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line">threadStart();</span><br></pre></td></tr></table></figure><p>Host (whose implementation class is StandardHost) does nothing special in start: it calls the parent class ContainerBase’s startInternal method, which again looks up the child containers and calls their start methods.</p><p>In the Context container, the execution logic of the start method is more complex:</p><p>First, create a Loader (which internally holds a class loader) and call its start method.</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line"><span class="type">WebappLoader</span> <span 
class="variable">webappLoader</span> <span class="operator">=</span> <span class="keyword">new</span> <span class="title class_">WebappLoader</span>();</span><br><span class="line">webappLoader.setDelegate(getDelegate());</span><br><span class="line">setLoader(webappLoader);</span><br><span class="line"></span><br><span class="line"><span class="type">Loader</span> <span class="variable">loader</span> <span class="operator">=</span> getLoader();</span><br><span class="line"><span class="keyword">if</span> (loader <span class="keyword">instanceof</span> Lifecycle) &#123;</span><br><span class="line">    ((Lifecycle) loader).start();</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Next, find the child containers (Wrappers) and call their start methods.</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">for</span> (Container child : findChildren()) &#123;</span><br><span class="line">    <span class="keyword">if</span> (!child.getState().isAvailable()) &#123;</span><br><span class="line">        child.start();</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Then call the onStartup method of each ServletContainerInitializer:</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">for</span> (Map.Entry&lt;ServletContainerInitializer, Set&lt;Class&lt;?&gt;&gt;&gt; entry : initializers.entrySet()) &#123;</span><br><span class="line">    entry.getKey().onStartup(entry.getValue(),getServletContext());</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Looking for subcontainers 
that need to be loaded at startup: for these, the Wrapper loads the Servlet internally and calls Servlet#init. Only load-on-startup servlets are loaded here; by default, servlet loading is deferred until the first request.</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">loadOnStartup(findChildren());</span><br></pre></td></tr></table></figure><p>There is nothing special about the Wrapper container.</p><p>Connector internally calls the start method of ProtocolHandler:</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line">protocolHandler.start();</span><br></pre></td></tr></table></figure><p>ProtocolHandler is an interface with an abstract class AbstractProtocol, which internally implements the main logic:</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line">endpoint.start();</span><br><span class="line"></span><br><span class="line">asyncTimeout = <span class="keyword">new</span> <span class="title class_">AsyncTimeout</span>();</span><br><span class="line"><span class="type">Thread</span> <span class="variable">timeoutThread</span> <span class="operator">=</span> <span class="keyword">new</span> <span class="title class_">Thread</span>(asyncTimeout, getNameInternal() + <span class="string">&quot;-AsyncTimeout&quot;</span>);</span><br><span class="line">timeoutThread.start();</span><br></pre></td></tr></table></figure><p>Endpoint is an interface; the abstract class is AbstractEndpoint, whose start calls startInternal, supplied by the concrete 
implementation. Taking NioEndpoint as an example, the main logic is as follows:</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line">createExecutor();</span><br><span class="line"></span><br><span class="line">initializeConnectionLatch();</span><br><span class="line"></span><br><span class="line">poller = <span class="keyword">new</span> <span class="title class_">Poller</span>();</span><br><span class="line"><span class="type">Thread</span> <span class="variable">pollerThread</span> <span class="operator">=</span> <span class="keyword">new</span> <span class="title class_">Thread</span>(poller, getName() + <span class="string">&quot;-Poller&quot;</span>);</span><br><span class="line">pollerThread.setPriority(threadPriority);</span><br><span class="line">pollerThread.setDaemon(<span class="literal">true</span>);</span><br><span class="line">pollerThread.start();</span><br><span class="line"></span><br><span class="line"></span><br><span class="line">startAcceptorThread();</span><br></pre></td></tr></table></figure><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line"><span class="keyword">public</span> <span 
class="keyword">void</span> <span class="title function_">createExecutor</span><span class="params">()</span> &#123;</span><br><span class="line">    internalExecutor = <span class="literal">true</span>;</span><br><span class="line">    <span class="type">TaskQueue</span> <span class="variable">taskqueue</span> <span class="operator">=</span> <span class="keyword">new</span> <span class="title class_">TaskQueue</span>();</span><br><span class="line">    <span class="type">TaskThreadFactory</span> <span class="variable">tf</span> <span class="operator">=</span> <span class="keyword">new</span> <span class="title class_">TaskThreadFactory</span>(getName() + <span class="string">&quot;-exec-&quot;</span>, daemon, getThreadPriority());</span><br><span class="line">    executor = <span class="keyword">new</span> <span class="title class_">ThreadPoolExecutor</span>(getMinSpareThreads(), getMaxThreads(), <span class="number">60</span>, TimeUnit.SECONDS,taskqueue, tf);</span><br><span class="line">    taskqueue.setParent( (ThreadPoolExecutor) executor);</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>The CatalinaShutdownHook’s run method essentially calls Catalina’s stop:</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line">Catalina.<span class="built_in">this</span>.stop();</span><br></pre></td></tr></table></figure><p>Catalina’s stop method essentially calls the Server’s stop and destroy:</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span 
class="line">10</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line"><span class="type">Server</span> <span class="variable">s</span> <span class="operator">=</span> getServer();</span><br><span class="line"><span class="type">LifecycleState</span> <span class="variable">state</span> <span class="operator">=</span> s.getState();</span><br><span class="line"><span class="keyword">if</span> (LifecycleState.STOPPING_PREP.compareTo(state) &lt;= <span class="number">0</span></span><br><span class="line">                    &amp;&amp; LifecycleState.DESTROYED.compareTo(state) &gt;= <span class="number">0</span>) &#123;</span><br><span class="line"></span><br><span class="line">&#125; <span class="keyword">else</span> &#123;</span><br><span class="line">    s.stop();</span><br><span class="line">    s.destroy();</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Internally, the Server’s await method will actually be called. By default the shutdown port is set (8005), so a ServerSocket is created and the main thread blocks in the accept() method.</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line">awaitSocket = <span class="keyword">new</span> <span class="title class_">ServerSocket</span>(port, <span class="number">1</span>,</span><br><span class="line">                    InetAddress.getByName(address));</span><br><span class="line"><span class="keyword">while</span> (!stopAwait) &#123;</span><br><span class="line">    <span class="type">ServerSocket</span> <span class="variable">serverSocket</span> <span class="operator">=</span> awaitSocket;</span><br><span class="line">    <span class="type">Socket</span> 
<span class="variable">socket</span> <span class="operator">=</span> serverSocket.accept();</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>In Catalina’s stopServer method (which is invoked when Tomcat is run with the stop command), in addition to stopping the Server, if the port is greater than 0 it will also create a socket and send the SHUTDOWN string:</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line"><span class="type">Socket</span> <span class="variable">socket</span> <span class="operator">=</span> <span class="keyword">new</span> <span class="title class_">Socket</span>(s.getAddress(), s.getPort());</span><br><span class="line"><span class="type">OutputStream</span> <span class="variable">stream</span> <span class="operator">=</span> socket.getOutputStream();</span><br><span class="line"><span class="type">String</span> <span class="variable">shutdown</span> <span class="operator">=</span> s.getShutdown();</span><br><span class="line"><span class="keyword">for</span> (<span class="type">int</span> <span class="variable">i</span> <span class="operator">=</span> <span class="number">0</span>; i &lt; shutdown.length(); i++) &#123;</span><br><span class="line">    stream.write(shutdown.charAt(i));</span><br><span class="line">&#125;</span><br><span class="line">stream.flush();</span><br></pre></td></tr></table></figure>]]></content>
    
    
    <summary type="html">Tomcat starts from the Bootstrap class. Its init method does two main things: it initializes the class loaders and creates a Catalina instance via reflection.</summary>
    
    
    
    <category term="Technology" scheme="https://www.nablepart.com/categories/Technology/"/>
    
    
    <category term="method" scheme="https://www.nablepart.com/tags/method/"/>
    
    <category term="Tomcat" scheme="https://www.nablepart.com/tags/Tomcat/"/>
    
    <category term="finally" scheme="https://www.nablepart.com/tags/finally/"/>
    
    <category term="startup" scheme="https://www.nablepart.com/tags/startup/"/>
    
    <category term="Bootstrap" scheme="https://www.nablepart.com/tags/Bootstrap/"/>
    
    <category term="Reflection" scheme="https://www.nablepart.com/tags/Reflection/"/>
    
    <category term="generates" scheme="https://www.nablepart.com/tags/generates/"/>
    
    <category term="main" scheme="https://www.nablepart.com/tags/main/"/>
    
  </entry>
  
  <entry>
    <title>The overall architecture of Tomcat</title>
    <link href="https://www.nablepart.com/4c59a46b71bc/"/>
    <id>https://www.nablepart.com/4c59a46b71bc/</id>
    <published>2023-11-04T16:09:00.000Z</published>
    <updated>2025-08-25T09:00:39.794Z</updated>
    
    <content type="html"><![CDATA[<h1 id="Tomcat-overall-architecture"><a href="#Tomcat-overall-architecture" class="headerlink" title="Tomcat overall architecture"></a>Tomcat overall architecture</h1><p>Understanding the overall architecture of Tomcat is an essential part of learning how Tomcat works. This article introduces the core components of Tomcat, and then shows, at the source level, how Tomcat routes an incoming request to a Servlet.</p><h2 id="1-General-Architecture"><a href="#1-General-Architecture" class="headerlink" title="1. General Architecture"></a>1. General Architecture</h2><p>Tomcat has to fulfill two core functions:</p><ol><li>Handle socket connections, converting network byte streams to and from Request and Response objects.</li><li>Load and manage Servlets, and actually process Request objects.</li></ol><p>Tomcat therefore designed two core components for these two jobs: the connector (Connector) and the container (Container). The connector is responsible for external communication; the container is responsible for internal processing.</p><p>Tomcat supports the following I&#x2F;O models:</p><ul><li>NIO: non-blocking I&#x2F;O, implemented with the Java NIO class library.</li><li>NIO2: asynchronous I&#x2F;O, implemented with the NIO2 class library introduced in JDK 7.</li><li>APR: implemented with the Apache Portable Runtime, a native library written in C&#x2F;C++.</li></ul><p>The application-layer protocols Tomcat supports are:</p><ul><li>HTTP&#x2F;1.1: the access protocol used by most web applications.</li><li>AJP: for integration with web servers (e.g. Apache).</li><li>HTTP&#x2F;2: substantially improves web performance.</li></ul><p>To let Tomcat support a variety of I&#x2F;O models and application-layer protocols, a single container may be paired with multiple connectors. 
A single connector or container, however, cannot serve the outside world on its own; they have to be combined to work, and the assembled whole is called the Service component. The Service is just one more layer of packaging around the connectors and the container, bundling them together. Tomcat may have more than one Service; this design, too, is a matter of flexibility. The relationship diagram is as follows:</p><p><img src="https://s2.loli.net/2023/11/05/6o5bzxmNQq3DMJg.webp"></p><p>As you can see from the diagram:</p><ul><li>The top level is the Server, which also represents a Tomcat instance.</li><li>A Server has one or more Services, and a Service has multiple connectors and one container.</li><li>Connectors and containers communicate with each other through the standard ServletRequest and ServletResponse.</li></ul><h2 id="2-connectors"><a href="#2-connectors" class="headerlink" title="2. connectors"></a>2. connectors</h2><p>The connector shields the Servlet container from differences in protocol and I&#x2F;O model: whether the request arrives over HTTP or AJP, the container always receives a standard ServletRequest object.</p><p>The connector performs three highly cohesive functions:</p><ul><li>Network communication.</li><li>Application-layer protocol parsing.</li><li>Conversion between Tomcat Request&#x2F;Response and ServletRequest&#x2F;ServletResponse.</li></ul><p>Correspondingly, Tomcat designed three components to implement these functions: EndPoint, Processor and Adapter. The overall logic: the EndPoint provides byte streams to the Processor, the Processor provides Tomcat Request objects to the Adapter, and the Adapter provides ServletRequest objects to the container.</p><p>The I&#x2F;O model and application protocol can be freely combined, such as NIO + HTTP or NIO2 + AJP. 
Tomcat considers network communication and application-layer protocol parsing together, and designed an interface called ProtocolHandler to encapsulate these two points of variation. Concrete combinations include Http11NioProtocol and AjpNioProtocol. A series of abstract base classes encapsulates the common parts; for example, AbstractProtocol implements the ProtocolHandler interface. The entire inheritance relationship is as follows:</p><p><img src="https://s2.loli.net/2023/11/05/LBDU4JxKs8hCGHN.webp"></p><p>To summarize: the connector’s three core components, EndPoint, Processor and Adapter, do these three things, and EndPoint and Processor are grouped into an abstract ProtocolHandler component. The relationship is shown in the following figure:</p><p><img src="https://s2.loli.net/2023/11/05/XtnJc2679eaz3LQ.webp"></p><h3 id="2-1-ProtocolHandler-Components"><a href="#2-1-ProtocolHandler-Components" class="headerlink" title="2.1 ProtocolHandler Components"></a>2.1 ProtocolHandler Components</h3><p>ProtocolHandler internally comprises the EndPoint and the Processor; the following describes how they work.</p><p><strong>EndPoint</strong>: the communication endpoint, i.e. the concrete handler that receives and sends over sockets; it is an abstraction of the transport layer, so the EndPoint is what implements the TCP&#x2F;IP level. EndPoint is an interface and AbstractEndpoint is its abstract implementation class, whose concrete subclasses contain two important sub-components: the Acceptor and the SocketProcessor.</p><p>The Acceptor listens for socket connection requests. The SocketProcessor handles accepted sockets; it implements the Runnable interface and calls the Processor in its run method. To increase throughput, the SocketProcessor is submitted to a thread pool for execution. 
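</p><p>The accept-then-dispatch split described above can be sketched with JDK classes only (the name AcceptorDemo and the fixed-size pool are illustrative assumptions, not Tomcat’s actual classes):</p>

```java
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// JDK-only sketch of the Acceptor / SocketProcessor split: one thread blocks
// in accept(), and each accepted socket is handed to a worker pool.
public class AcceptorDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(4); // plays the Executor role
        try (ServerSocket server = new ServerSocket(0);             // port 0: any free port
             Socket client = new Socket("127.0.0.1", server.getLocalPort());
             Socket accepted = server.accept()) {                   // Acceptor role
            // SocketProcessor role: a Runnable run on the pool, not on the acceptor thread.
            Future<?> done = executor.submit(() -> System.out.println("processing connection"));
            done.get(); // block only so this demo is deterministic
        }
        executor.shutdown();
    }
}
```

<p>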
This thread pool is called the Executor.</p><p><strong>Processor</strong>: if the EndPoint implements the TCP&#x2F;IP level, the Processor implements the HTTP level. The Processor receives the socket from the EndPoint, reads the byte stream and parses it into Tomcat Request and Response objects, and submits them to the container for processing via the Adapter; the Processor is an abstraction of the application-layer protocol. Processor is an interface defining request processing and related methods; its abstract implementation class AbstractProcessor encapsulates properties common to all protocols. Concrete implementations include Http11Processor, which implements protocol-specific parsing and request processing.</p><p>The detailed component relationships in the connector are as follows: <img src="https://s2.loli.net/2023/11/04/UM4t8egaPXIKZ7Q.webp"></p><h3 id="2-2-Adapter"><a href="#2-2-Adapter" class="headerlink" title="2.2 Adapter"></a>2.2 Adapter</h3><p>Because protocols differ, clients send different request information, so Tomcat defined its own Request class to “store” the request data. The ProtocolHandler is responsible for parsing the request and producing a Tomcat Request object. But this is not a standard HttpServletRequest, which means a Tomcat Request cannot be passed directly to the container. The Tomcat designers’ solution was to introduce the CoyoteAdapter, a classic use of the adapter pattern: the connector calls CoyoteAdapter’s service method with the Tomcat Request object, which converts it to a ServletRequest and then calls the container’s service method.</p><h2 id="3-Container"><a href="#3-Container" class="headerlink" title="3. Container"></a>3. 
Container</h2><h3 id="3-1-Hierarchy-of-containers"><a href="#3-1-Hierarchy-of-containers" class="headerlink" title="3.1 Hierarchy of containers"></a>3.1 Hierarchy of containers</h3><p>Tomcat has designed four types of containers, namely Engine, Host, Context and Wrapper; they are not peers but stand in parent-child relationships. The schematic is as follows:</p><p><img src="https://s2.loli.net/2023/11/05/cHz2RjfgN98VpUT.webp"></p><p>Tomcat is designed with this many layers to keep the architecture layered and the Servlet container flexible.</p><ul><li>Context represents a Web application; Wrapper represents a Servlet, and a Web application may have more than one Servlet.</li><li>Host represents a virtual host, or site; Tomcat can be configured with multiple virtual host addresses, and multiple Web applications can be deployed under one virtual host.</li><li>Engine is the engine that manages multiple virtual hosts; a Service can have at most one Engine.</li></ul><p>Tomcat’s server.xml configuration file can deepen your understanding of the container hierarchy. Tomcat uses a componentized design in which the components are configurable: the outermost layer is the Server, and the other components are configured inside this top-level container in a prescribed format, as in this trimmed default server.xml:</p><figure class="highlight xml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line">&lt;Server port=&quot;8005&quot; shutdown=&quot;SHUTDOWN&quot;&gt;</span><br><span class="line">  &lt;Service name=&quot;Catalina&quot;&gt;</span><br><span class="line">    &lt;Connector port=&quot;8080&quot; protocol=&quot;HTTP/1.1&quot; /&gt;</span><br><span class="line">    &lt;Engine name=&quot;Catalina&quot; defaultHost=&quot;localhost&quot;&gt;</span><br><span class="line">      &lt;Host name=&quot;localhost&quot; appBase=&quot;webapps&quot; /&gt;</span><br><span class="line">    &lt;/Engine&gt;</span><br><span class="line">  &lt;/Service&gt;</span><br><span class="line">&lt;/Server&gt;</span><br></pre></td></tr></table></figure><p>How does Tomcat manage these containers? Through the composite pattern. Concretely, all container components implement the Container interface, so the composite pattern lets single container objects and composed container objects be used in a uniform way. 
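</p><p>The composite idea can be sketched with plain JDK collections (the Node class is an illustrative stand-in; Tomcat’s real type is the Container interface shown below):</p>

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal composite-pattern sketch of Tomcat's parent-child containers.
// "Node" is an illustrative stand-in for Tomcat's Container interface.
public class CompositeDemo {
    static class Node {
        final String name;
        final Map<String, Node> children = new LinkedHashMap<>();
        Node parent;
        Node(String name) { this.name = name; }
        void addChild(Node child) { child.parent = this; children.put(child.name, child); }
        Node findChild(String name) { return children.get(name); }
    }

    public static void main(String[] args) {
        Node engine = new Node("Engine");
        Node host = new Node("localhost");
        Node context = new Node("/order");
        engine.addChild(host);
        host.addChild(context);
        // Leaf and composite nodes are traversed uniformly through one type.
        System.out.println(engine.findChild("localhost").findChild("/order").name); // prints "/order"
    }
}
```

<p>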
Here a single container object is a Wrapper at the bottom of the hierarchy, while composite container objects are the Context, Host or Engine above it. The Container interface is defined as follows:</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">public</span> <span class="keyword">interface</span> <span class="title class_">Container</span> <span class="keyword">extends</span> <span class="title class_">Lifecycle</span> &#123;</span><br><span class="line">    <span class="keyword">public</span> <span class="keyword">void</span> <span class="title function_">setName</span><span class="params">(String name)</span>;</span><br><span class="line">    <span class="keyword">public</span> Container <span class="title function_">getParent</span><span class="params">()</span>;</span><br><span class="line">    <span class="keyword">public</span> <span class="keyword">void</span> <span class="title function_">setParent</span><span class="params">(Container container)</span>;</span><br><span class="line">    <span class="keyword">public</span> <span class="keyword">void</span> <span class="title function_">addChild</span><span class="params">(Container child)</span>;</span><br><span class="line">    <span class="keyword">public</span> <span class="keyword">void</span> <span class="title function_">removeChild</span><span class="params">(Container child)</span>;</span><br><span class="line">    <span class="keyword">public</span> Container <span class="title function_">findChild</span><span class="params">(String name)</span>;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>You can see the methods 
getParent, setParent, addChild and removeChild. The interface also extends the Lifecycle interface, which is used to manage the life cycle of each component in a unified way.</p><h3 id="3-2-The-request-localization-servlet-process"><a href="#3-2-The-request-localization-servlet-process" class="headerlink" title="3.2 How a request is located to a Servlet"></a>3.2 How a request is located to a Servlet</h3><p>With so many levels of containers, how does Tomcat determine which Wrapper container’s Servlet handles a given request? Tomcat uses the Mapper component to accomplish this task.</p><p>The Mapper component’s job is to locate a user-requested URL to a Servlet. It works like this: the Mapper stores the configuration information of the Web applications, which is essentially the <strong>mapping relationship between container components and access paths</strong>: for example, the domain names configured on Host containers, the Web application paths of Context containers, and the Servlet mapping paths of Wrapper containers.</p><p>When a request arrives, the Mapper component locates a Servlet by parsing the domain name and path in the request URL and looking them up in the map it maintains. <strong>A request URL is ultimately located to exactly one Wrapper container, that is, one Servlet</strong>.</p><p>Here is an example of the locating process. Suppose two systems run under the same Tomcat, configured with two virtual domains: manage.shopping.com manages users and products, and contains two Web applications; user.shopping.com lets end customers search for and purchase products, and also contains two Web applications.</p><p>For such a deployment, Tomcat creates one Service component, one Engine container, two Host child containers, and two Context child containers under each Host. Since a Web application usually has multiple Servlets, each Context also has multiple Wrapper containers. 
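<p>As an illustration of this lookup idea, here is a toy sketch (hypothetical names such as <code>ToyMapper</code>; this is not Tomcat’s real Mapper implementation) that models the domain, context path and servlet path levels with plain nested maps:</p>

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of the Mapper lookup described in the text (not Tomcat code):
// domain -> context path -> servlet path -> wrapper (Servlet) name.
public class ToyMapper {
    private final Map<String, Map<String, Map<String, String>>> routes = new HashMap<>();

    public void addWrapper(String host, String contextPath, String servletPath, String wrapper) {
        routes.computeIfAbsent(host, h -> new HashMap<>())
              .computeIfAbsent(contextPath, c -> new HashMap<>())
              .put(servletPath, wrapper);
    }

    // Resolve a host plus URI to exactly one wrapper name, or null when nothing matches.
    public String map(String host, String uri) {
        Map<String, Map<String, String>> contexts = routes.get(host);
        if (contexts == null) return null;
        for (Map.Entry<String, Map<String, String>> ctx : contexts.entrySet()) {
            if (uri.startsWith(ctx.getKey() + "/")) {
                String servletPath = uri.substring(ctx.getKey().length());
                return ctx.getValue().get(servletPath);
            }
        }
        return null;
    }

    public static void main(String[] args) {
        ToyMapper mapper = new ToyMapper();
        mapper.addWrapper("user.shopping.com", "/order", "/buy", "BuyServlet");
        // http://user.shopping.com:8080/order/buy resolves to the BuyServlet wrapper
        System.out.println(mapper.map("user.shopping.com", "/order/buy")); // prints BuyServlet
    }
}
```

<p>The real Mapper additionally handles longest-prefix matching, wildcard mappings and welcome files; the sketch only shows the exact-match core of the idea.</p>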
The schematic is as follows:</p><p><img src="https://s2.loli.net/2023/11/05/4AwIVEsUkzC3cxO.webp"></p><p>Suppose the URL accessed is <code>http://user.shopping.com:8080/order/buy</code>. How does Tomcat locate it?</p><ul><li><strong>First, select the Service and Engine by protocol and port number</strong>. Each Tomcat connector listens on a different port: the default HTTP connector listens on 8080, and the default AJP connector on 8009. The example accesses port 8080, so the request is picked up by the HTTP connector; a connector belongs to a Service component, so the Service is identified. There is only one Engine container within the Service, so the Engine is also identified.</li><li><strong>Then select the Host by domain name</strong>. With the Service and Engine identified, the Mapper component finds the appropriate Host container using the domain name in the URL. Visiting user.shopping.com in the example finds the Host2 container.</li><li><strong>Next, find the Context component based on the URL</strong>. After the Host is confirmed, the Mapper matches the corresponding Web application path against the path in the URL. The example accesses &#x2F;order, so Context4, the Context container, is found.</li><li><strong>Finally, find the Wrapper (Servlet) by the URL path</strong>. With the Context determined, the Mapper finds the specific Wrapper and Servlet according to the Servlet mapping paths configured in web.xml.</li></ul><p>The lookup thus descends layer by layer through parent-child containers to a particular Servlet, but the Servlet is not the only component that handles the request; the parent and child containers along the lookup path also do some processing of the request. 
After the Engine container does its processing on the request, it passes the request to its child Host container to continue, and the request is finally passed down to the Wrapper; this calling process is <strong>implemented through the Pipeline-Valve pipeline</strong>.</p><h3 id="3-3-Pipeline-Valve"><a href="#3-3-Pipeline-Valve" class="headerlink" title="3.3 Pipeline-Valve"></a>3.3 Pipeline-Valve</h3><p>Pipeline-Valve is an instance of the chain-of-responsibility pattern: a number of processors handle a request in sequence, and each processor does its own part of the work before calling the next processor to continue.</p><p>A Valve represents one processing point, such as permission authentication or log printing. The interface is as follows:</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">public</span> <span class="keyword">interface</span> <span class="title class_">Valve</span> &#123;</span><br><span class="line">  <span class="keyword">public</span> Valve <span class="title function_">getNext</span><span class="params">()</span>;</span><br><span class="line">  <span class="keyword">public</span> <span class="keyword">void</span> <span class="title function_">setNext</span><span class="params">(Valve valve)</span>;</span><br><span class="line">  <span class="keyword">public</span> <span class="keyword">void</span> <span class="title function_">invoke</span><span class="params">(Request request, Response response)</span>;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>The invoke method handles the request, and getNext returns the next 
Valve.</p><p>The Pipeline interface is as follows:</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">public</span> <span class="keyword">interface</span> <span class="title class_">Pipeline</span> <span class="keyword">extends</span> <span class="title class_">Contained</span> &#123;</span><br><span class="line">  <span class="keyword">public</span> <span class="keyword">void</span> <span class="title function_">addValve</span><span class="params">(Valve valve)</span>;</span><br><span class="line">  <span class="keyword">public</span> Valve <span class="title function_">getBasic</span><span class="params">()</span>;</span><br><span class="line">  <span class="keyword">public</span> <span class="keyword">void</span> <span class="title function_">setBasic</span><span class="params">(Valve valve)</span>;</span><br><span class="line">  <span class="keyword">public</span> Valve <span class="title function_">getFirst</span><span class="params">()</span>;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Note the addValve method: a Pipeline maintains a chain of Valves, and new Valves can be inserted into the Pipeline to process the request. There is no invoke method in the Pipeline, because the whole call chain is driven by the Valves themselves: after a Valve finishes its own processing, it calls getNext().invoke() to trigger the next Valve.</p><p>Each container has a Pipeline object; once the first Valve is triggered, all Valves in the container are called in turn. 
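<p>The chain just described can be sketched in a few lines of Java (hypothetical classes for illustration, not Tomcat’s real Valve implementation): each valve does its work and then invokes the next one, with the basic valve sitting at the end of the chain.</p>

```java
// Minimal chain-of-responsibility sketch of Pipeline-Valve (hypothetical
// classes, not Tomcat code). Each valve appends a marker to a trace so the
// calling order is visible, then invokes the next valve in the chain.
public class PipelineDemo {
    interface Valve {
        Valve getNext();
        void setNext(Valve v);
        void invoke(StringBuilder trace);
    }

    static abstract class ValveBase implements Valve {
        private Valve next;
        public Valve getNext() { return next; }
        public void setNext(Valve v) { next = v; }
    }

    // addValve inserts new valves in front of the basic valve, which always
    // stays at the end of the chain.
    static class Pipeline {
        private final Valve basic;
        private Valve first;
        Pipeline(Valve basic) { this.basic = basic; this.first = basic; }
        void addValve(Valve v) {
            if (first == basic) { first = v; v.setNext(basic); return; }
            Valve cur = first;
            while (cur.getNext() != basic) cur = cur.getNext();
            cur.setNext(v);
            v.setNext(basic);
        }
        Valve getFirst() { return first; }
    }

    static String process() {
        StringBuilder trace = new StringBuilder();
        Valve basic = new ValveBase() {          // plays the role of the basic valve
            public void invoke(StringBuilder t) { t.append("basic"); }
        };
        Pipeline pipeline = new Pipeline(basic);
        pipeline.addValve(new ValveBase() {      // e.g. permission authentication
            public void invoke(StringBuilder t) { t.append("auth->"); getNext().invoke(t); }
        });
        pipeline.addValve(new ValveBase() {      // e.g. access logging
            public void invoke(StringBuilder t) { t.append("log->"); getNext().invoke(t); }
        });
        pipeline.getFirst().invoke(trace);       // triggering the first valve runs them all
        return trace.toString();
    }

    public static void main(String[] args) {
        System.out.println(process()); // prints auth->log->basic
    }
}
```

<p>In Tomcat the basic valve of one container additionally forwards the request to the first valve of the child container’s pipeline, which is how the request descends from Engine to Wrapper.</p>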
Different containers are connected through the getBasic method: the basic Valve sits at the end of the Valve chain, is an indispensable Valve of the Pipeline, and is responsible for invoking the first Valve of the next-layer container’s Pipeline. The whole process is shown in the following figure:<img src="https://s2.loli.net/2023/11/05/FvJgqbcG6u9xI4Q.webp"></p><ol><li><p>The last Valve of the Wrapper container creates a Filter chain and calls its doFilter method, which finally calls the Servlet’s service method.</p><p>The differences between Filter and Valve:</p><ul><li>Valve is a Tomcat-private mechanism, tightly coupled to Tomcat’s infrastructure and API. The Servlet API is a public standard, and all Web containers, including Jetty, support the Filter mechanism.</li><li>Valves work at the Web container level and can intercept requests to all applications, while Filters work at the application level and can only intercept the requests of a single Web application. If you want interception across the entire Web container, you must implement it with a Valve.</li></ul></li></ol><h2 id="4-Tomcat-access-to-the-Servlet-process"><a href="#4-Tomcat-access-to-the-Servlet-process" class="headerlink" title="4. Tomcat access to the Servlet process"></a>4. 
Tomcat access to the Servlet process</h2><p>Here is a record of the process by which Tomcat delivers a request to a Servlet, to make source code tracing easier.</p><ol><li><p>When Tomcat reads data from the network, the work is encapsulated into a Runnable and handed to the thread pool; the Runnable implementation is SocketProcessorBase.</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">executor.execute(sc);</span><br></pre></td></tr></table></figure></li><li><p>Within the doRun method of NioEndpoint$SocketProcessor, the process method of the ProtocolHandler is called.</p><figure class="highlight ini"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="attr">state</span> = getHandler().process(socketWrapper, event)</span><br></pre></td></tr></table></figure></li><li><p>AbstractProtocol internally creates the Processor and then calls the Processor’s process method.</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">processor = getProtocol().createProcessor();</span><br><span class="line">state = processor.process(wrapper, status);</span><br></pre></td></tr></table></figure></li><li><p>Execution first reaches the process method of the AbstractProcessorLight class, which internally calls the service method. 
This is an abstract class, and service is implemented by concrete subclasses.</p><figure class="highlight ini"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="attr">state</span> = service(socketWrapper)</span><br></pre></td></tr></table></figure></li><li><p>Http11Processor’s service method then calls the Adapter’s service method.</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">getAdapter().service(request, response);</span><br></pre></td></tr></table></figure></li><li><p>CoyoteAdapter’s service method calls postParseRequest, inside which the Mapper resolves the containers (Host, Context) to which the request belongs.</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">connector.getService().getMapper().map(serverName, decodedURI,</span><br><span class="line">version, request.getMappingData())</span><br></pre></td></tr></table></figure></li><li><p>The Pipeline is then invoked to execute the Valve chain, which runs StandardEngineValve, StandardHostValve, StandardContextValve and StandardWrapperValve in turn.</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">connector.getService().getContainer().getPipeline().getFirst().invoke(</span><br><span class="line">                        request, response);</span><br></pre></td></tr></table></figure></li><li><p>In the invoke method of StandardWrapperValve, the Servlet is initialized (if it has not been initialized yet).</p><figure class="highlight ini"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="attr">servlet</span> = wrapper.allocate()</span><br></pre></td></tr></table></figure></li><li><p>The ApplicationFilterChain is then constructed and the FilterChain is invoked.</p><figure class="highlight vbscript"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">ApplicationFilterChain filterChain =</span><br><span class="line">                ApplicationFilterFactory.createFilterChain(<span class="built_in">request</span>, wrapper, servlet);</span><br><span class="line">//....</span><br><span class="line"></span><br><span class="line">filterChain.<span class="keyword">do</span><span class="built_in">Filter</span>(<span class="built_in">request</span>.getRequest(),</span><br><span class="line">                                    <span class="built_in">response</span>.getResponse());</span><br></pre></td></tr></table></figure></li><li><p>ApplicationFilterChain calls the Filters recursively.</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">filter.doFilter(request, response, <span class="built_in">this</span>);</span><br></pre></td></tr></table></figure></li><li><p>After all Filters have been called, the Servlet’s service method is invoked.</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">servlet.service(request, response);</span><br></pre></td></tr></table></figure></li></ol><h2 id="5-References"><a href="#5-References" class="headerlink" title="5. References"></a>5. 
References</h2><ol><li>Tomcat &amp; Jetty: A Deep Dive - Geek Time</li><li>Tomcat source code branch 8.5.x</li></ol><p><img src="https://s2.loli.net/2023/11/05/2lxm4cRIXf3usqh.png"></p>]]></content>
    
    
    <summary type="html">Understanding Tomcat’s overall architecture and working principles is an essential part of learning Tomcat. This article introduces Tomcat’s core components and then walks through, at the source code level, how Tomcat processes a request.</summary>
    
    
    
    <category term="Technology" scheme="https://www.nablepart.com/categories/Technology/"/>
    
    
    <category term="technical" scheme="https://www.nablepart.com/tags/technical/"/>
    
    <category term="Tomcat" scheme="https://www.nablepart.com/tags/Tomcat/"/>
    
    <category term="Understanding" scheme="https://www.nablepart.com/tags/Understanding/"/>
    
    <category term="architecture" scheme="https://www.nablepart.com/tags/architecture/"/>
    
    <category term="essential" scheme="https://www.nablepart.com/tags/essential/"/>
    
    <category term="introduced" scheme="https://www.nablepart.com/tags/introduced/"/>
    
    <category term="finally" scheme="https://www.nablepart.com/tags/finally/"/>
    
  </entry>
  
  <entry>
    <title>Redis founder open-sources the smallest chat server: only 200 lines of code, and 2.8K stars within days!</title>
    <link href="https://www.nablepart.com/3fa0e749e06b/"/>
    <id>https://www.nablepart.com/3fa0e749e06b/</id>
    <published>2023-11-04T16:08:00.000Z</published>
    <updated>2025-08-25T09:00:39.794Z</updated>
    
    <content type="html"><![CDATA[<p>At lunchtime, we were chatting in a technical exchange group about some interesting things the founder of <code>Redis</code> has been up to, such as writing science fiction after leaving <code>Redis</code>.</p><p>Curious about the science fiction, TJ searched for it, and found that the <code>Redis</code> author has actually taken on something new recently!</p><h2 id="The-world’s-smallest-chat-server"><a href="#The-world’s-smallest-chat-server" class="headerlink" title="The world’s smallest chat server"></a>The world’s smallest chat server</h2><p>The Redis author’s new open source project is called <a href="https://link.juejin.cn/?target=https://www.didispace.com/tj/tj-smallchat.html" title="https://www.didispace.com/tj/tj-smallchat.html"><strong>SmallChat</strong></a>. As the About section says, the goal of this open source project is to build the smallest chat server.</p><p>Looking at the repository contents, that is indeed the case:</p><p><img src="https://p3-juejin.byteimg.com/tos-cn-i-k3u1fbpfcp/c71361924e60435eae111c5026fc4524~tplv-k3u1fbpfcp-jj-mark:3024:0:0:0:q75.awebp"></p><p>After stripping out the many comments, the code comes to just over 200 lines, streamlining taken to the extreme.</p><h2 id="Origins-and-Future"><a href="#Origins-and-Future" class="headerlink" title="Origins and Future"></a>Origins and Future</h2><p>The project’s README says little about how to use it; instead it focuses on the project’s background and future outlook.</p><p>The content is well worth savoring. TJ feels it reflects the mindset of a good developer, and learning from this way of thinking can help us create more interesting things.</p><p>Here’s a look at his story:</p><p>Yesterday I was talking with a few friends, mostly front-end developers, a bit removed from systems programming. 
We were reminiscing about the old days of IRC. I said that writing a very simple IRC server is an experience everyone should have (I showed them my implementation written in Tcl; I was shocked to realize I wrote it 18 years ago: time flies). There are some very interesting parts to such a program: a single process performing multiplexing, keeping per-client state, and trying to access that state quickly when a client has new data, and so on.</p><p>But then the discussion drifted, and I thought I would show them a very simple example written in C. What is the smallest chat server you can write? To be really minimal, we shouldn’t require any proper client. Even if it’s not great, it should work with telnet or <code>netcat</code>. The main operation of the server is just to receive some chat lines and send them to all the other clients, sometimes called a fanout operation. However, doing this properly would require real buffering and so on. We’d like it simpler: let’s cheat by relying on kernel buffers, and pretend we receive a full line from a client every time (an assumption that usually holds in practice, so things <code>kinda</code> work).</p><p>Well, with these tricks, we can implement a chat that even lets users set their nicknames, in just 200 lines of code (excluding spaces and comments, of course). Since I wrote this little program as an example for my friends, I decided to push it to GitHub as well.</p><p>About future work:</p><p>Over the next few days I will keep modifying this program in order to evolve it. The different evolutionary steps will be tagged to match the YouTube episodes of my Writing System Software series (which covers these changes). 
Here’s my plan (it may change, but this is more or less what I want to cover):</p><ul><li>Implement buffering of reads and writes</li><li>Avoid linear arrays and use a dictionary data structure to hold client state</li><li>Write a proper client: line editing that can handle asynchronous events</li><li>Switch from select(2) to a more advanced API</li><li>Simple symmetric encryption for the chat</li></ul><p>How about that? An interesting open source project is born. That’s all for today’s sharing. Finally, as usual, the open source address: <a href="https://github.com/antirez/smallchat">github.com&#x2F;antirez&#x2F;sma…</a>. Take a look at the code if you’re interested.</p>]]></content>
    
    
    <summary type="html">At lunchtime, we were chatting in a technical exchange group about some interesting things the founder of Redis has been up to, such as writing science fiction after leaving Redis.</summary>
    
    
    
    <category term="Technology" scheme="https://www.nablepart.com/categories/Technology/"/>
    
    
    <category term="facilities" scheme="https://www.nablepart.com/tags/facilities/"/>
    
    <category term="intelligence" scheme="https://www.nablepart.com/tags/intelligence/"/>
    
    <category term="lunchtime" scheme="https://www.nablepart.com/tags/lunchtime/"/>
    
    <category term="chatting" scheme="https://www.nablepart.com/tags/chatting/"/>
    
    <category term="fictio" scheme="https://www.nablepart.com/tags/fictio/"/>
    
    <category term="technical" scheme="https://www.nablepart.com/tags/technical/"/>
    
    <category term="source" scheme="https://www.nablepart.com/tags/source/"/>
    
    <category term="Redis" scheme="https://www.nablepart.com/tags/Redis/"/>
    
  </entry>
  
  <entry>
    <title>Microsoft open-sources Playwright, a powerful Python automation tool!</title>
    <link href="https://www.nablepart.com/6b39bec78b78/"/>
    <id>https://www.nablepart.com/6b39bec78b78/</id>
    <published>2023-11-04T16:07:00.000Z</published>
    <updated>2025-08-25T09:00:39.794Z</updated>
    
    <content type="html"><![CDATA[<p><img src="https://s2.loli.net/2023/11/05/h2aYw7jCZk5BdRn.webp"></p><p>Hello everyone, I am brother boy.</p><p>Anyone who has played with crawlers knows <code>selenium</code>, the automated testing tool. Writing a <code>Python</code> automation script to free your hands is basically routine, and what a crawler cannot scrape can often be reached through automated testing instead.</p><p>Although <code>selenium</code> has complete documentation, it still has a learning cost, and for a complete beginner there is a real threshold.</p><p>Recently, Microsoft open-sourced a project called <code>playwright-python</code>, and it is simply awesome! It is an automation tool for the <code>Python</code> language: even without writing code, you can build automation.</p><p><img src="https://s2.loli.net/2023/11/05/pQZyunt543r8kAO.webp"></p><p>It may sound a bit unbelievable, but it really is that good. Let’s take a look at this magic tool together.</p><h2 id="1-Introduction-to-Playwright"><a href="#1-Introduction-to-Playwright" class="headerlink" title="1. Introduction to Playwright"></a>1. Introduction to Playwright</h2><p><code>Playwright</code> is a powerful Python library that drives the major browsers, such as <code>Chromium</code>, <code>Firefox</code> and <code>WebKit</code>, through a single API, and supports both headless and headed modes.</p><p>The automation Playwright provides is lightweight, powerful, reliable and fast, and it supports the <code>Linux</code>, <code>Mac</code> and <code>Windows</code> operating systems.</p><p><img src="https://s2.loli.net/2023/11/05/yuKScwrk3YpdPQE.webp"></p><h2 id="2-Using-Playwright"><a href="#2-Using-Playwright" class="headerlink" title="2. Using Playwright"></a>2. 
Using Playwright</h2><h3 id="Installation"><a href="#Installation" class="headerlink" title="Installation"></a>Installation</h3><p>Installing <code>Playwright</code> is very simple: just two steps.</p><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line">pip install playwright</span><br><span class="line"></span><br><span class="line">python -m playwright install</span><br></pre></td></tr></table></figure><p>The two commands above install, respectively:</p><ul><li>The Playwright dependency library, which requires Python 3.7+.</li><li>The driver files for the Chromium, Firefox, WebKit and other browsers.</li></ul><h3 id="Recording"><a href="#Recording" class="headerlink" title="Recording"></a>Recording</h3><p>With <code>Playwright</code> you don’t need to write a single line of code: just operate the browser manually, and it records your actions and then generates the code script automatically.</p><p>Recording is done with the <code>codegen</code> command, just one line.</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line">python -m playwright codegen</span><br></pre></td></tr></table></figure><p>The usage of <code>codegen</code> can be viewed with <code>--help</code>; the simplest usage is to append the URL directly after the command, adding options if otherwise needed.</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span 
class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br></pre></td><td class="code"><pre><span class="line">python -m playwright codegen --<span class="built_in">help</span></span><br><span class="line">Usage: index codegen [options] [url]</span><br><span class="line"></span><br><span class="line"><span class="built_in">open</span> page <span class="keyword">and</span> generate code <span class="keyword">for</span> user actions</span><br><span class="line"></span><br><span class="line">Options:</span><br><span class="line">  -o, --output   saves the generated script to a file</span><br><span class="line">  --target        language to use, one of javascript, python, python-<span class="keyword">async</span>, csharp (default: <span class="string">&quot;python&quot;</span>)</span><br><span class="line">  -h, --<span class="built_in">help</span>                display <span class="built_in">help</span> <span class="keyword">for</span> command</span><br><span class="line"></span><br><span class="line">Examples:</span><br><span class="line"></span><br><span class="line">  $ codegen</span><br><span class="line">  $ codegen --target=python</span><br><span class="line">  $ -b webkit codegen https://example.com</span><br></pre></td></tr></table></figure><p>The options mean:</p><ul><li>-o: save the recorded script to a file.</li><li>--target: the language of the generated script, one of javascript, python, python-async or csharp; the default is python.</li><li>-b: specify the browser driver.</li></ul><p>For example, to search on <code>baidu.com</code> with the <code>chromium</code> driver and save the result in <code>python</code> as a <code>my.py</code> file:</p><figure class="highlight arduino"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td 
class="code"><pre><span class="line">python -m playwright codegen --target python -o <span class="string">&#x27;my.py&#x27;</span> -b chromium https:</span><br></pre></td></tr></table></figure><p>Running the command automatically opens the browser, and then every action you take in the browser is translated into code, as shown below.</p><p><img src="https://p3-juejin.byteimg.com/tos-cn-i-k3u1fbpfcp/91da3a38f52f4c598fd34017b7139b1d~tplv-k3u1fbpfcp-zoom-in-crop-mark:1512:0:0:0.webp"></p><p>When you finish, the browser closes automatically and the generated automation script is saved to a py file.</p><figure class="highlight scss"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br></pre></td><td class="code"><pre><span class="line">from playwright import sync_playwright</span><br><span class="line"></span><br><span class="line">def <span class="built_in">run</span>(playwright):</span><br><span class="line">    browser = playwright.chromium.<span class="built_in">launch</span>(headless=False)</span><br><span class="line">    context = browser.<span 
class="built_in">newContext</span>()</span><br><span class="line"></span><br><span class="line">    # Open new page</span><br><span class="line">    page = context.<span class="built_in">newPage</span>()</span><br><span class="line"></span><br><span class="line">    page.<span class="built_in">goto</span>(<span class="string">&quot;https://www.baidu.com/&quot;</span>)</span><br><span class="line"></span><br><span class="line">    page.<span class="built_in">click</span>(<span class="string">&quot;input[name=\&quot;wd\&quot;]&quot;</span>)</span><br><span class="line"></span><br><span class="line">    page.<span class="built_in">fill</span>(<span class="string">&quot;input[name=\&quot;wd\&quot;]&quot;</span>, <span class="string">&quot;jingdong&quot;</span>)</span><br><span class="line"></span><br><span class="line">    page.<span class="built_in">click</span>(<span class="string">&quot;text=\&quot;京东\&quot;&quot;</span>)</span><br><span class="line"></span><br><span class="line">    # Click //a[<span class="built_in">normalize-space</span>(.)=<span class="string">&#x27;京东JD.COM官网 多快好省 只为品质生活&#x27;</span>]</span><br><span class="line">    with page.<span class="built_in">expect_navigation</span>():</span><br><span class="line">        with page.<span class="built_in">expect_popup</span>() as popup_info:</span><br><span class="line">            page.<span class="built_in">click</span>(<span class="string">&quot;//a[normalize-space(.)=&#x27;京东JD.COM官网 多快好省 只为品质生活&#x27;]&quot;</span>)</span><br><span class="line">        page1 = popup_info.value</span><br><span class="line">    # ---------------------</span><br><span class="line">    context.<span class="built_in">close</span>()</span><br><span class="line">    browser.<span class="built_in">close</span>()</span><br><span class="line"></span><br><span class="line">with <span class="built_in">sync_playwright</span>() as playwright:</span><br><span class="line">    <span 
class="built_in">run</span>(playwright)</span><br></pre></td></tr></table></figure><p>In addition, <code>playwright</code> provides both synchronous and asynchronous API interfaces, documented below.</p><blockquote><p>Link: <a href="https://link.juejin.cn/?target=https://microsoft.github.io/playwright-python/index.html" title="https://microsoft.github.io/playwright-python/index.html">microsoft.github.io&#x2F;playwright-…</a></p></blockquote><h3 id="Synchronization"><a href="#Synchronization" class="headerlink" title="Synchronization"></a>Synchronization</h3><p>The following sample code opens three browsers in turn, searches on Baidu, takes a screenshot, and exits.</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">from</span> playwright <span class="keyword">import</span> sync_playwright</span><br><span class="line"></span><br><span class="line"><span class="keyword">with</span> sync_playwright() <span class="keyword">as</span> p:</span><br><span class="line">    <span class="keyword">for</span> browser_type <span class="keyword">in</span> [p.chromium, p.firefox, p.webkit]:</span><br><span class="line">        browser = browser_type.launch()</span><br><span class="line">        page = browser.newPage()</span><br><span class="line">        page.goto(<span class="string">&#x27;https://baidu.com/&#x27;</span>)</span><br><span class="line">        page.screenshot(path=<span class="string">f&#x27;example-<span class="subst">&#123;browser_type.name&#125;</span>.png&#x27;</span>)</span><br><span class="line">        browser.close()</span><br></pre></td></tr></table></figure><h3 id="Asynchronous"><a 
href="#Asynchronous" class="headerlink" title="Asynchronous"></a>Asynchronous</h3><p>Asynchronous operations can be combined with <code>asyncio</code> to perform three browser operations simultaneously.</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">import</span> asyncio</span><br><span class="line"><span class="keyword">from</span> playwright <span class="keyword">import</span> async_playwright</span><br><span class="line"></span><br><span class="line"><span class="keyword">async</span> <span class="keyword">def</span> <span class="title function_">main</span>():</span><br><span class="line">    <span class="keyword">async</span> <span class="keyword">with</span> async_playwright() <span class="keyword">as</span> p:</span><br><span class="line">        <span class="keyword">for</span> browser_type <span class="keyword">in</span> [p.chromium, p.firefox, p.webkit]:</span><br><span class="line">            browser = <span class="keyword">await</span> browser_type.launch()</span><br><span class="line">            page = <span class="keyword">await</span> browser.newPage()</span><br><span class="line">            <span class="keyword">await</span> page.goto(<span class="string">&#x27;http://baidu.com/&#x27;</span>)</span><br><span class="line">            <span class="keyword">await</span> page.screenshot(path=<span class="string">f&#x27;example-<span class="subst">&#123;browser_type.name&#125;</span>.png&#x27;</span>)</span><br><span class="line">            <span 
class="keyword">await</span> browser.close()</span><br><span class="line"></span><br><span class="line">asyncio.get_event_loop().run_until_complete(main())</span><br></pre></td></tr></table></figure><h3 id="Mobile"><a href="#Mobile" class="headerlink" title="Mobile"></a>Mobile</h3><p>What’s more, <code>playwright</code> also supports browser emulation on mobile. Here’s a snippet of code from the official documentation that simulates the Safari browser on an iPhone 11 Pro at a given geographic location, first navigating to <code>maps.google.com</code>, then locating and taking a screenshot.</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">from</span> playwright <span class="keyword">import</span> sync_playwright</span><br><span class="line"></span><br><span class="line"><span class="keyword">with</span> sync_playwright() <span class="keyword">as</span> p:</span><br><span class="line">    iphone_11 = p.devices[<span class="string">&#x27;iPhone 11 Pro&#x27;</span>]</span><br><span class="line">    browser = p.webkit.launch(headless=<span class="literal">False</span>)</span><br><span class="line">    context = browser.newContext(</span><br><span class="line">        **iphone_11,</span><br><span class="line">        locale=<span class="string">&#x27;en-US&#x27;</span>,</span><br><span class="line">        geolocation=&#123; <span 
class="string">&#x27;longitude&#x27;</span>: <span class="number">12.492507</span>, <span class="string">&#x27;latitude&#x27;</span>: <span class="number">41.889938</span> &#125;,</span><br><span class="line">        permissions=[<span class="string">&#x27;geolocation&#x27;</span>]</span><br><span class="line">    )</span><br><span class="line">    page = context.newPage()</span><br><span class="line">    page.goto(<span class="string">&#x27;https://maps.google.com&#x27;</span>)</span><br><span class="line">    page.click(<span class="string">&#x27;text=&quot;Your location&quot;&#x27;</span>)</span><br><span class="line">    page.screenshot(path=<span class="string">&#x27;colosseum-iphone.png&#x27;</span>)</span><br><span class="line">    browser.close()</span><br></pre></td></tr></table></figure><p>It can also be used with the <code>pytest</code> plugin, so you can try it yourself.</p><h2 id="3-Summary"><a href="#3-Summary" class="headerlink" title="3. Summary"></a>3. Summary</h2><p><code>playwright</code> has many advantages over existing automated testing tools, such as:</p><ul><li>Cross-browser, supports Chromium, Firefox, WebKit.</li><li>Cross-operating system, supports Linux, Mac, Windows.</li><li>can provide recording code generation function, free hands</li><li>Can be used for mobile</li></ul><p>Currently there are shortcomings is that the ecology and documentation is not very complete, such as no API Chinese documentation, no better tutorials and examples for learning. However, I believe that as more and more people know, the future will be better and better.</p><blockquote><p>GitHub Link：<a href="https://link.juejin.cn/?target=https://github.com/microsoft/playwright-python" title="https://github.com/microsoft/playwright-python">github.com&#x2F;microsoft&#x2F;p…</a><br>open source organization：Microsoft</p></blockquote>]]></content>
    
    
    <summary type="html">Hello everyone. If you have played with crawlers, you probably know Selenium, the automated testing tool. Writing a Python automation script to free your hands is routine practice, and when a crawler cannot crawl a site, automated testing can step in. Although Selenium has complete documentation, it still carries a learning cost and can be quite a threshold for complete beginners.…</summary>
    
    
    
    <category term="Technology" scheme="https://www.nablepart.com/categories/Technology/"/>
    
    
    <category term="improve" scheme="https://www.nablepart.com/tags/improve/"/>
    
    <category term="machine" scheme="https://www.nablepart.com/tags/machine/"/>
    
    <category term="Python" scheme="https://www.nablepart.com/tags/Python/"/>
    
    <category term="monitoring" scheme="https://www.nablepart.com/tags/monitoring/"/>
    
    <category term="facilities" scheme="https://www.nablepart.com/tags/facilities/"/>
    
    <category term="intelligence" scheme="https://www.nablepart.com/tags/intelligence/"/>
    
    <category term="based" scheme="https://www.nablepart.com/tags/based/"/>
    
    <category term="learning" scheme="https://www.nablepart.com/tags/learning/"/>
    
  </entry>
  
  <entry>
    <title>I used Python to crawl 100G of photo sets from a gallery site</title>
    <link href="https://www.nablepart.com/93e2b2682883/"/>
    <id>https://www.nablepart.com/93e2b2682883/</id>
    <published>2023-11-04T16:06:00.000Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
    <content type="html"><![CDATA[<p><img src="https://s2.loli.net/2023/11/05/cxQ2T1yfdtn7ZHF.png"></p><h2 id="Preface"><a href="#Preface" class="headerlink" title="Preface"></a><strong>Preface</strong></h2><p>Recently, I’ve been working on monitoring-related infrastructure and found that many scripts are based on Python. I heard of its great name long ago; “life is short, I use Python” is no joke. With the rise of artificial intelligence, machine learning, and deep learning, most of the AI code on the market is written in Python. So in the age of artificial intelligence, it’s time to learn some Python.</p><p><strong>Advancement Guide</strong></p><p>For those who don’t have any language development experience, it is recommended to learn systematically from the beginning, whether from a book, a video, or a text tutorial.</p><p>For students with development experience in other languages, it is recommended to start with a case study, such as crawling a set of images from a certain website.</p><p>Once you have figured out one language, its grammar and the like, and developed a feel for it, you can basically read eighty or ninety percent of the code in another.</p><p>So experienced developers are not advised to start from scratch; whether with a video or a book, beginning a language that way costs too much time.</p><p>Of course, when you go deeper, you will still need to learn it systematically, but that comes later.</p><h2 id="Software-tools"><a href="#Software-tools" class="headerlink" title="Software tools"></a><strong>Software tools</strong></h2><h4 id="Python3"><a href="#Python3" class="headerlink" title="Python3"></a><strong>Python3</strong></h4><p>The latest version, Python 3.7.1, is chosen here.</p><p>Recommended installation tutorial:</p><p><a href="http://www.runoob.com/python3/python3-install.html">http://www.runoob.com/python3/python3-install.html</a></p><p>Win Download Address:</p><p><a 
href="https://www.python.org/downloads/windows">https://www.python.org/downloads/windows</a></p><p>Linux download address:</p><p><a href="https://www.python.org/downloads/source">https://www.python.org/downloads/source</a></p><h4 id="PyCharm"><a href="#PyCharm" class="headerlink" title="PyCharm"></a><strong>PyCharm</strong></h4><p>Visual development tool:</p><p><a href="http://www.jetbrains.com/pycharm">http://www.jetbrains.com/pycharm</a></p><h2 id="Cases"><a href="#Cases" class="headerlink" title="Cases"></a><strong>Cases</strong></h2><p><strong>Realization steps</strong></p><p>Taking the photo-gallery site as an example, it is actually very simple, just the following four steps:</p><ul><li>Get the number of pages on the home page, and create a folder for each page number</li><li>Get the album links on each page</li><li>Enter each album and get its page count (each album displays its pictures across several pages)</li><li>Get the image tag on each of those pages and download the picture</li></ul><h3 id="Note"><a href="#Note" class="headerlink" title="Note"></a><strong>Note</strong></h3><p>During crawling you also need to pay attention to the following points, which may be helpful:</p><ol><li>Importing libraries: much like frameworks or tools in Java, the low-level details are already encapsulated</li></ol><p>Installing third-party libraries</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment"># On Windows, where python3 is installed directly</span></span><br><span class="line">pip install bs4 requests</span><br><span class="line"><span class="comment"># On Linux, where python2 and python3 coexist</span></span><br><span class="line">pip3 install bs4 requests</span><br></pre></td></tr></table></figure><p>Importing third-party libraries</p><figure 
class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment"># Import the requests library</span></span><br><span class="line"><span class="keyword">import</span> requests</span><br><span class="line"><span class="comment"># Import the file-handling library</span></span><br><span class="line"><span class="keyword">import</span> os</span><br><span class="line"><span class="comment"># bs4, i.e. BeautifulSoup, is one of the common libraries for writing python crawlers, mainly used to parse html tags.</span></span><br><span class="line"><span class="keyword">import</span> bs4</span><br><span class="line"><span class="keyword">from</span> bs4 <span class="keyword">import</span> BeautifulSoup</span><br><span class="line"><span class="comment"># Base libraries</span></span><br><span class="line"><span class="keyword">import</span> sys</span><br><span class="line"><span class="comment"># Python 3.x fix for Chinese encoding issues</span></span><br><span class="line"><span class="keyword">import</span> importlib</span><br><span class="line">importlib.reload(sys)</span><br></pre></td></tr></table></figure><p>2) Define functions: a crawler may run to several hundred lines, so try not to write it as one big blob</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">def</span> <span class="title function_">download</span>(<span class="params">page_no, file_path</span>):    <span class="comment"># write the code logic here</span></span><br><span class="line"></span><br></pre></td></tr></table></figure><ol start="3"><li>Define global variables</li></ol><figure class="highlight 
python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment"># Give the request a header to mimic the chrome browser</span></span><br><span class="line"><span class="keyword">global</span> headers</span><br><span class="line"><span class="comment"># Tell the interpreter this is a global variable</span></span><br><span class="line">headers = &#123;<span class="string">&#x27;User-Agent&#x27;</span>: <span class="string">&#x27;Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36&#x27;</span>&#125;</span><br><span class="line"><span class="comment"># Before using it inside a function, you need to</span></span><br><span class="line"><span class="comment"># tell the interpreter that the headers used in this method is the global variable defined above, not a local variable.</span></span><br><span class="line"><span class="keyword">global</span> headers</span><br><span class="line"></span><br></pre></td></tr></table></figure><ol start="4"><li>Anti-hotlinking</li></ol><p>Some sites protect their images with hotlink checks; adding a Referer header usually gets around them</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">headers = &#123;<span class="string">&#x27;Referer&#x27;</span>: href&#125;</span><br><span class="line">img = requests.get(url, headers=headers)</span><br></pre></td></tr></table></figure><ol start="5"><li>Switching versions</li></ol><p>The Linux box is an AliCloud server where the default version is python2; python3 you install yourself</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br></pre></td><td class="code"><pre><span class="line">[root@AY140216131049Z mzitu]<span class="comment"># python2 -V</span></span><br><span class="line">Python 2.7.5</span><br><span class="line">[root@AY140216131049Z mzitu]<span class="comment"># python3 -V</span></span><br><span class="line">Python 3.7.1</span><br><span class="line"><span class="comment"># the default version</span></span><br><span class="line">[root@AY140216131049Z mzitu]<span class="comment"># python -V</span></span><br><span class="line">Python 2.7.5</span><br><span class="line"><span class="comment"># temporarily switch versions &lt;whereis python&gt;</span></span><br><span class="line">[root@AY140216131049Z mzitu]<span class="comment"># alias python=&#x27;/usr/local/bin/python3.7&#x27;</span></span><br><span class="line">[root@AY140216131049Z mzitu]<span class="comment"># python -V</span></span><br><span class="line">Python 3.7.1</span><br><span class="line"></span><br></pre></td></tr></table></figure><ol start="6"><li>Exception handling</li></ol><p>During crawling there may be pages that throw exceptions; we catch them here so they do not affect subsequent operations</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">try</span>:</span><br><span class="line">    <span class="comment"># business logic goes here</span></span><br><span class="line"><span class="keyword">except</span> Exception <span class="keyword">as</span> e:</span><br><span class="line">    <span class="built_in">print</span>(e)</span><br></pre></td></tr></table></figure><h3 id="Code-implementation"><a href="#Code-implementation" class="headerlink" title="Code implementation"></a><strong>Code implementation</strong></h3><p>Edit the script: vi mzitu.py</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span 
class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br><span 
class="line">75</span><br><span class="line">76</span><br><span class="line">77</span><br><span class="line">78</span><br><span class="line">79</span><br><span class="line">80</span><br><span class="line">81</span><br><span class="line">82</span><br><span class="line">83</span><br><span class="line">84</span><br><span class="line">85</span><br><span class="line">86</span><br><span class="line">87</span><br><span class="line">88</span><br><span class="line">89</span><br><span class="line">90</span><br><span class="line">91</span><br><span class="line">92</span><br><span class="line">93</span><br><span class="line">94</span><br><span class="line">95</span><br><span class="line">96</span><br><span class="line">97</span><br><span class="line">98</span><br><span class="line">99</span><br><span class="line">100</span><br><span class="line">101</span><br><span class="line">102</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment"># coding=utf-8</span></span><br><span class="line"><span class="comment"># !/usr/bin/python</span></span><br><span class="line"><span class="comment"># Import the requests library</span></span><br><span class="line"><span class="keyword">import</span> requests</span><br><span class="line"><span class="comment"># Import the file-handling library</span></span><br><span class="line"><span class="keyword">import</span> os</span><br><span class="line"><span class="keyword">import</span> bs4</span><br><span class="line"><span class="keyword">from</span> bs4 <span class="keyword">import</span> BeautifulSoup</span><br><span class="line"><span class="keyword">import</span> sys</span><br><span class="line"><span class="keyword">import</span> importlib</span><br><span class="line">importlib.reload(sys)</span><br><span class="line"><span class="comment"># Give the request a header to mimic the chrome browser</span></span><br><span class="line"><span class="keyword">global</span> headers</span><br><span class="line">headers = &#123;</span><br><span class="line">    <span class="string">&#x27;User-Agent&#x27;</span>: <span class="string">&#x27;Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36&#x27;</span>&#125;</span><br><span class="line"><span class="comment"># URL of the site to crawl</span></span><br><span class="line">mziTu = <span class="string">&#x27;http://www.mzitu.com/&#x27;</span></span><br><span class="line"><span class="comment"># Define the storage location</span></span><br><span class="line"><span class="keyword">global</span> save_path</span><br><span class="line">save_path = <span class="string">&#x27;/mnt/data/mzitu&#x27;</span></span><br><span class="line"></span><br><span class="line"></span><br><span class="line"><span class="comment"># Create the folder</span></span><br><span class="line"><span class="keyword">def</span> <span class="title function_">createFile</span>(<span class="params">file_path</span>):</span><br><span class="line">    <span class="keyword">if</span> os.path.exists(file_path) <span class="keyword">is</span> <span class="literal">False</span>:</span><br><span class="line">        os.makedirs(file_path)</span><br><span class="line">        <span class="comment"># Switch the working directory to the folder created above</span></span><br><span class="line">        os.chdir(file_path)</span><br><span class="line"></span><br><span class="line"><span class="comment"># Download files</span></span><br><span class="line"><span class="keyword">def</span> <span class="title function_">download</span>(<span class="params">page_no, file_path</span>):</span><br><span class="line">    <span class="keyword">global</span> headers</span><br><span class="line">    res_sub = requests.get(page_no, headers=headers)</span><br><span class="line">    <span class="comment"># Parse the html</span></span><br><span class="line">    soup_sub = BeautifulSoup(res_sub.text, <span class="string">&#x27;html.parser&#x27;</span>)</span><br><span class="line">    <span class="comment"># Get the album links on the page</span></span><br><span class="line">    all_a = soup_sub.find(<span class="string">&#x27;div&#x27;</span>, class_=<span class="string">&#x27;postlist&#x27;</span>).find_all(<span class="string">&#x27;a&#x27;</span>, target=<span class="string">&#x27;_blank&#x27;</span>)</span><br><span class="line">    count = <span class="number">0</span></span><br><span class="line">    <span class="keyword">for</span> a <span class="keyword">in</span> all_a:</span><br><span class="line">        count = count + <span class="number">1</span></span><br><span class="line">        <span class="keyword">if</span> (count % <span class="number">2</span>) == <span class="number">0</span>:</span><br><span class="line">            <span class="built_in">print</span>(<span class="string">&quot;Inner page number: &quot;</span> + <span class="built_in">str</span>(count))</span><br><span class="line">            <span class="comment"># Extract the href</span></span><br><span class="line">            href = a.attrs[<span class="string">&#x27;href&#x27;</span>]</span><br><span class="line">            <span class="built_in">print</span>(<span class="string">&quot;Album URL: &quot;</span> + href)</span><br><span class="line">            res_sub_1 = requests.get(href, headers=headers)</span><br><span class="line">            soup_sub_1 = BeautifulSoup(res_sub_1.text, <span class="string">&#x27;html.parser&#x27;</span>)</span><br><span class="line">            <span class="comment"># ------ exception handling is advisable here ------</span></span><br><span class="line">            <span class="keyword">try</span>:</span><br><span class="line">                <span class="comment"># Get the number of pictures in the album</span></span><br><span class="line">                pic_max = soup_sub_1.find(<span class="string">&#x27;div&#x27;</span>, class_=<span class="string">&#x27;pagenavi&#x27;</span>).find_all(<span class="string">&#x27;span&#x27;</span>)[<span class="number">6</span>].text</span><br><span class="line">                <span class="built_in">print</span>(<span class="string">&quot;Number of pictures: &quot;</span> + pic_max)</span><br><span class="line">                <span class="keyword">for</span> j <span class="keyword">in</span> <span class="built_in">range</span>(<span class="number">1</span>, <span class="built_in">int</span>(pic_max) + <span class="number">1</span>):</span><br><span class="line">                    <span class="comment"># print(&quot;Sub-page number: &quot; + str(j))</span></span><br><span class="line">                    <span class="comment"># j is an int and needs converting to a string</span></span><br><span class="line">                    href_sub = href + <span class="string">&quot;/&quot;</span> + <span class="built_in">str</span>(j)</span><br><span class="line">                    <span class="built_in">print</span>(href_sub)</span><br><span class="line">                    res_sub_2 = requests.get(href_sub, headers=headers)</span><br><span class="line">                    soup_sub_2 = BeautifulSoup(res_sub_2.text, <span class="string">&quot;html.parser&quot;</span>)</span><br><span class="line">                    img = soup_sub_2.find(<span class="string">&#x27;div&#x27;</span>, class_=<span class="string">&#x27;main-image&#x27;</span>).find(<span class="string">&#x27;img&#x27;</span>)</span><br><span class="line">                    <span class="keyword">if</span> <span class="built_in">isinstance</span>(img, bs4.element.Tag):</span><br><span class="line">                        <span class="comment"># Extract the src</span></span><br><span class="line">                        url = img.attrs[<span class="string">&#x27;src&#x27;</span>]</span><br><span class="line">                        array = url.split(<span class="string">&#x27;/&#x27;</span>)</span><br><span class="line">                        file_name = array[<span class="built_in">len</span>(array) - <span class="number">1</span>]</span><br><span class="line">                        <span class="comment"># print(file_name)</span></span><br><span class="line">                        <span class="comment"># Add a Referer header against hotlink protection</span></span><br><span class="line">                        headers = &#123;<span class="string">&#x27;Referer&#x27;</span>: href&#125;</span><br><span class="line">                        img = requests.get(url, headers=headers)</span><br><span class="line">                        <span class="comment"># print(&#x27;Saving picture&#x27;)</span></span><br><span class="line">                        f = <span class="built_in">open</span>(file_name, <span class="string">&#x27;ab&#x27;</span>)</span><br><span class="line">                        f.write(img.content)</span><br><span class="line">                        <span class="comment"># print(file_name, &#x27;picture saved!&#x27;)</span></span><br><span class="line">                        f.close()</span><br><span class="line">            <span class="keyword">except</span> Exception <span class="keyword">as</span> e:</span><br><span class="line">                <span class="built_in">print</span>(e)</span><br><span class="line"></span><br><span class="line"><span class="comment"># Main method</span></span><br><span class="line"><span class="keyword">def</span> <span class="title function_">main</span>():</span><br><span class="line">    res = requests.get(mziTu, headers=headers)</span><br><span class="line">    <span class="comment"># Parse with the built-in html.parser</span></span><br><span class="line">    soup = BeautifulSoup(res.text, <span class="string">&#x27;html.parser&#x27;</span>)</span><br><span class="line">    <span class="comment"># Create the folder</span></span><br><span class="line">    createFile(save_path)</span><br><span class="line">    <span class="comment"># Get the total number of pages on the home page</span></span><br><span class="line">    img_max = soup.find(<span class="string">&#x27;div&#x27;</span>, class_=<span class="string">&#x27;nav-links&#x27;</span>).find_all(<span class="string">&#x27;a&#x27;</span>)[<span class="number">3</span>].text</span><br><span class="line">    <span class="comment"># print(&quot;Total pages:&quot; + img_max)</span></span><br><span class="line">    <span class="keyword">for</span> i <span class="keyword">in</span> <span class="built_in">range</span>(<span class="number">1</span>, <span class="built_in">int</span>(img_max) + <span class="number">1</span>):</span><br><span class="line">        <span class="comment"># Get the URL of each page</span></span><br><span class="line">        <span class="keyword">if</span> i == <span class="number">1</span>:</span><br><span class="line">            page = mziTu</span><br><span class="line">        <span class="keyword">else</span>:</span><br><span class="line">            page = mziTu + <span class="string">&#x27;page/&#x27;</span> + <span class="built_in">str</span>(i)</span><br><span class="line">        file = save_path + <span class="string">&#x27;/&#x27;</span> + <span class="built_in">str</span>(i)</span><br><span class="line">        createFile(file)</span><br><span class="line">        <span class="comment"># Download the pictures on each page</span></span><br><span class="line">        <span class="built_in">print</span>(<span class="string">&quot;Album page: &quot;</span> + page)</span><br><span class="line">        download(page, file)</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"><span class="keyword">if</span> __name__ == <span class="string">&#x27;__main__&#x27;</span>:</span><br><span class="line">    main()</span><br></pre></td></tr></table></figure><p>Run the script on the Linux server with the following commands</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">python3 mzitu.py</span><br><span class="line"><span class="comment"># or run it in the background</span></span><br><span class="line">nohup python3 -u mzitu.py &gt; mzitu.log 2&gt;&amp;1 &amp;</span><br></pre></td></tr></table></figure><p>Currently only
crawled one category of albums, 17G in total, 5,332 pictures.</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">[root@itstyle mzitu]<span class="comment"># du -sh</span></span><br><span class="line">17G     .</span><br><span class="line">[root@itstyle mzitu]<span class="comment"># ll -s</span></span><br><span class="line">total 5332</span><br></pre></td></tr></table></figure><p>Below, feast your eyes on a sample of the results.</p><p><img src="https://p1-jj.byteimg.com/tos-cn-i-t2oaga2asx/gold-user-assets/2018/11/12/16707ed15707ecaa~tplv-t2oaga2asx-jj-mark:3024:0:0:0:q75.png"></p><p><img src="https://p1-jj.byteimg.com/tos-cn-i-t2oaga2asx/gold-user-assets/2018/11/12/16707ed15710de0b~tplv-t2oaga2asx-jj-mark:3024:0:0:0:q75.png"></p><h2 id="Summary"><a href="#Summary" class="headerlink" title="Summary"></a><strong>Summary</strong></h2><p>As a beginner’s script, it surely has some problems and places to be optimized; Python veterans who spot them, please offer your guidance.</p><p>In fact, the script is very simple. From configuring the environment and installing the IDE to writing the script and getting it to run smoothly took four or five hours in all, and in the end it ran in one go. Limited by server bandwidth and configuration, the 17G of images took three or four hours to download; as for the remaining 83G, download it yourselves.<br><img src="https://s2.loli.net/2023/11/04/vby3RNFwuTiUzr6.png"></p><p>—END—</p>]]></content>
    
    
    <summary type="html">Preface: Recently I have been working on monitoring-related support facilities and found that many of the scripts are based on Python. I had long heard of its fame; "life is short, I use Python" is no joke. With the rise of artificial intelligence, machine learning, and deep learning, most of the artificial intelligence currently on the market</summary>
    
    
    
    <category term="Technology" scheme="https://www.nablepart.com/categories/Technology/"/>
    
    
    <category term="improve" scheme="https://www.nablepart.com/tags/improve/"/>
    
    <category term="machine" scheme="https://www.nablepart.com/tags/machine/"/>
    
    <category term="Python" scheme="https://www.nablepart.com/tags/Python/"/>
    
    <category term="monitoring" scheme="https://www.nablepart.com/tags/monitoring/"/>
    
    <category term="facilities" scheme="https://www.nablepart.com/tags/facilities/"/>
    
    <category term="intelligence" scheme="https://www.nablepart.com/tags/intelligence/"/>
    
    <category term="based" scheme="https://www.nablepart.com/tags/based/"/>
    
    <category term="learning" scheme="https://www.nablepart.com/tags/learning/"/>
    
  </entry>
  
  <entry>
    <title>Getting Started with K8S in Detail, Building a Cluster on a Local VM, and Practicing the Grey Release Feature</title>
    <link href="https://www.nablepart.com/cb359f54a601/"/>
    <id>https://www.nablepart.com/cb359f54a601/</id>
    <published>2023-11-04T16:05:00.000Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
    <content type="html"><![CDATA[<p><img src="https://s2.loli.net/2023/11/05/gJD3OnLoejqkHVs.webp"></p><h1 id="introduction"><a href="#introduction" class="headerlink" title="introduction"></a>Introduction</h1><h2 id="Using-minikube"><a href="#Using-minikube" class="headerlink" title="Using minikube"></a>Using minikube</h2><blockquote><p>Minikube is a lightweight Kubernetes implementation that creates VMs and deploys simple clusters of just one node on your local machine.</p></blockquote><p>It is recommended to first use minikube on its own to try out the basic features (<a href="https://kubernetes.io/zh-cn/docs/tutorials/hello-minikube/" title="https://kubernetes.io/zh-cn/docs/tutorials/hello-minikube/">kubernetes.io</a>); you will still want minikube for comparison and reference when debugging later.</p><p>After installing minikube:</p><figure class="highlight ruby"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">minikube start</span><br><span class="line">minikube dashboard</span><br><span class="line"><span class="comment">// Opening http://127.0.0.1:51200/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser...</span></span><br><span class="line"></span><br><span class="line"><span class="comment">// the port is random</span></span><br></pre></td></tr></table></figure><p>For the dashboard initialized by minikube, there is no need to deal with permissions and login, so it can be used directly. 
If you build a bare metal dashboard, it will be extremely complicated to configure the permissions and logins for the dashboard.</p><p><img src="https://s2.loli.net/2023/11/05/Vk1jwLHSJYBzpAa.webp"></p><p>The page mainly supports creating, deleting, and inspecting the various resource types, which is helpful when you are not yet familiar with the commands.</p><p><img src="https://s2.loli.net/2023/11/05/NhAH2R3rmS9YkqQ.webp"></p><h2 id="Prepare-the-docker-image"><a href="#Prepare-the-docker-image" class="headerlink" title="Prepare the docker image"></a>Prepare the docker image</h2><p>Here we use two images hosted on Aliyun for debugging.</p><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment"># two versions, used to test updates</span></span><br><span class="line">registry.cn-hangzhou.aliyuncs.com/marquezyang/common:v1</span><br><span class="line">registry.cn-hangzhou.aliyuncs.com/marquezyang/common:v2</span><br></pre></td></tr></table></figure><p>It is a simple Node service on port 8080 that returns the current pod IP and hostname over HTTP; the v2 image additionally shows "v2".</p><figure class="highlight perl"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line"><span class="keyword">index</span> page / <span class="keyword">index</span> page <span class="number">v2</span></span><br><span class="line"></span><br><span class="line">IP: <span class="number">10.244.0.158</span>, hostname: test-k8s-5cc7cf6cf9-8d84m</span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="Deploying-services"><a href="#Deploying-services" class="headerlink" title="Deploying services"></a>Deploying services</h2><p>Create a namespace for easy 
management and cleanup:</p><figure class="highlight arduino"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl create <span class="keyword">namespace</span> test</span><br></pre></td></tr></table></figure><p>If <a href="https://link.juejin.cn/?target=https://github.com/ahmetb/kubectx" title="https://github.com/ahmetb/kubectx">kubectx</a> is easy to install on your system, you can run kubens test after installing it to switch the namespace; if it is not convenient to install, just add -n test to the subsequent commands to specify the test namespace.</p><p>Creating a YAML configuration file locally and applying it with kubectl apply -f file.yaml is equivalent to creating the resources on the command line.</p><p>appV1.yaml</p><figure class="highlight yaml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br></pre></td><td class="code"><pre><span class="line"><span class="attr">apiVersion:</span> <span class="string">apps/v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">Deployment</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line">  <span class="attr">labels:</span></span><br><span class="line">    <span class="attr">app:</span> 
<span class="string">test-k8s</span></span><br><span class="line"></span><br><span class="line">  <span class="attr">name:</span> <span class="string">test-k8s</span></span><br><span class="line"><span class="attr">spec:</span></span><br><span class="line">  <span class="attr">replicas:</span> <span class="number">3</span></span><br><span class="line"></span><br><span class="line">  <span class="attr">selector:</span></span><br><span class="line">    <span class="attr">matchLabels:</span></span><br><span class="line">      <span class="attr">app:</span> <span class="string">test-k8s</span></span><br><span class="line"></span><br><span class="line">  <span class="attr">template:</span></span><br><span class="line">    <span class="attr">metadata:</span></span><br><span class="line">      <span class="attr">labels:</span></span><br><span class="line">        <span class="attr">app:</span> <span class="string">test-k8s</span></span><br><span class="line">    <span class="attr">spec:</span></span><br><span class="line"></span><br><span class="line">      <span class="attr">containers:</span></span><br><span class="line">        <span class="bullet">-</span> <span class="attr">name:</span> <span class="string">test-k8s</span></span><br><span class="line">          <span class="attr">image:</span> <span class="string">registry.cn-hangzhou.aliyuncs.com/marquezyang/common:v1</span></span><br><span class="line"></span><br></pre></td></tr></table></figure><p>Bottom to top:</p><ul><li>A single <code>pod</code> is the smallest unit of a k8s deployment and contains one or more containers representing an application. For example, a WordPress deployment might run two containers, wordpress + mysql, inside a single pod.</li><li>A pod carries metadata labels, which the parent abstraction matches with a selector so the pods can be operated on as a group.</li><li>replicas: 3 creates a <code>ReplicaSet</code> collection, which contains the same pods. 
In this case, it creates 3 identical pods, contained in a <code>ReplicaSet</code>.</li><li>At the top, create a <code>Deployment</code> that points to the created <code>ReplicaSet</code>.</li></ul><p>Apply it with kubectl:</p><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl apply -f ./yaml/deploy/appv1.yaml -n <span class="built_in">test</span></span><br></pre></td></tr></table></figure><p>Find the single <code>Deployment</code> in the dashboard, click on it and scroll down to find the <code>ReplicaSet</code> that it points to, then click on that and scroll down to find the 3 pods that were created.</p><p><img src="https://s2.loli.net/2023/11/05/8FaqyVuKNpBAkU2.webp"></p><h2 id="Accessing-the-minikube-network"><a href="#Accessing-the-minikube-network" class="headerlink" title="Accessing the minikube network"></a>Accessing the minikube network</h2><p>minikube runs in docker and is network isolated. There are two ways to access the minikube network:</p><ul><li>minikube ssh, which drops you into the container's bash</li><li>minikube tunnel</li></ul><p>Here we use minikube ssh to try to reach a single pod; in the dashboard, open the details page of a particular pod to find its IP. <img src="https://s2.loli.net/2023/11/05/yk9r1QqINMAYRE2.webp"></p><p>After minikube ssh drops you into bash, curl the pod's IP address to reach an individual pod. <img src="https://s2.loli.net/2023/11/05/1ALgxW63rmPcF4C.webp"></p><h2 id="Creating-a-Service"><a href="#Creating-a-Service" class="headerlink" title="Creating a Service"></a>Creating a Service</h2><blockquote><p>The Service API is an integral part of Kubernetes and is an abstraction that helps you expose collections of Pods on the network. 
Each Service object defines a logical collection of endpoints (typically these endpoints are Pods) and a policy for how to access those Pods.</p></blockquote><p>Create service.yaml</p><figure class="highlight yaml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br></pre></td><td class="code"><pre><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">Service</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line">  <span class="attr">name:</span> <span class="string">deploy-service</span></span><br><span class="line"><span class="attr">spec:</span></span><br><span class="line">  <span class="attr">selector:</span></span><br><span class="line"></span><br><span class="line">    <span class="attr">app:</span> <span class="string">test-k8s</span></span><br><span class="line"></span><br><span class="line">  <span class="attr">type:</span> <span class="string">NodePort</span></span><br><span class="line">  <span class="attr">ports:</span></span><br><span class="line">    <span class="bullet">-</span> <span class="attr">port:</span> <span class="number">8080</span></span><br><span class="line">      <span class="attr">targetPort:</span> <span class="number">8080</span></span><br><span class="line">      <span class="attr">nodePort:</span> <span class="number">31123</span></span><br><span class="line"></span><br></pre></td></tr></table></figure><figure class="highlight 
bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">kubectl apply -f ./yaml/deploy/service.yaml -n <span class="built_in">test</span></span><br><span class="line">kubectl get svc -n <span class="built_in">test</span></span><br></pre></td></tr></table></figure><p><img src="https://s2.loli.net/2023/11/05/GQVxNDHXrwObq7S.webp"></p><p>From minikube ssh you can curl the service exposed by the Service, and thanks to <strong>load balancing</strong> you can see requests evenly distributed among the three pods 166, 167, and 168.<img src="https://s2.loli.net/2023/11/05/LKszRhuINUWyabx.webp"></p><p>You can also use minikube service to open the page in a browser automatically.</p><figure class="highlight js"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">minikube service deploy-service -n test</span><br></pre></td></tr></table></figure><h2 id="Creating-an-Ingress-Experience-Grey-Release"><a href="#Creating-an-Ingress-Experience-Grey-Release" class="headerlink" title="Creating an Ingress Experience Grey Release"></a>Creating an Ingress Experience Grey Release</h2><p>First, create a new deployment and service that use the v2 image, combined in a single file, appServiceV2.yaml.</p><figure class="highlight yaml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span 
class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br></pre></td><td class="code"><pre><span class="line"><span class="attr">apiVersion:</span> <span class="string">apps/v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">Deployment</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line">  <span class="attr">labels:</span></span><br><span class="line">    <span class="attr">app:</span> <span class="string">test-k8s-v2</span></span><br><span class="line"></span><br><span class="line">  <span class="attr">name:</span> <span class="string">test-k8s-v2</span></span><br><span class="line"><span class="attr">spec:</span></span><br><span class="line">  <span class="attr">replicas:</span> <span class="number">3</span></span><br><span class="line"></span><br><span class="line">  <span class="attr">selector:</span></span><br><span class="line">    <span class="attr">matchLabels:</span></span><br><span class="line">      <span class="attr">app:</span> <span class="string">test-k8s-v2</span></span><br><span class="line"></span><br><span class="line">  <span class="attr">template:</span></span><br><span class="line">    <span class="attr">metadata:</span></span><br><span class="line">      <span class="attr">labels:</span></span><br><span class="line">        <span class="attr">app:</span> <span 
class="string">test-k8s-v2</span></span><br><span class="line">    <span class="attr">spec:</span></span><br><span class="line"></span><br><span class="line">      <span class="attr">containers:</span></span><br><span class="line">        <span class="bullet">-</span> <span class="attr">name:</span> <span class="string">test-k8s-v2</span></span><br><span class="line">          <span class="attr">image:</span> <span class="string">registry.cn-hangzhou.aliyuncs.com/marquezyang/common:v2</span></span><br><span class="line"><span class="meta">---</span></span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">Service</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line">  <span class="attr">name:</span> <span class="string">test-k8s-v2</span></span><br><span class="line"><span class="attr">spec:</span></span><br><span class="line">  <span class="attr">selector:</span></span><br><span class="line">    <span class="attr">app:</span> <span class="string">test-k8s-v2</span></span><br><span class="line"></span><br><span class="line">  <span class="attr">type:</span> <span class="string">NodePort</span></span><br><span class="line">  <span class="attr">ports:</span></span><br><span class="line">    <span class="bullet">-</span> <span class="attr">port:</span> <span class="number">8080</span></span><br><span class="line">      <span class="attr">targetPort:</span> <span class="number">8080</span></span><br><span class="line">      <span class="attr">nodePort:</span> <span class="number">32000</span></span><br></pre></td></tr></table></figure><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">kubectl apply -f ./yaml/deploy/appServiceV2.yaml -n <span class="built_in">test</span></span><br><span 
class="line">kubectl get svc -n <span class="built_in">test</span></span><br></pre></td></tr></table></figure><p>At this point, there are two services, v1 and v2. <img src="https://s2.loli.net/2023/11/05/TNsGmoOZ7zxBMaI.webp"></p><p>Test the v2 service:</p><figure class="highlight js"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">minikube service test-k8s-v2 -n test</span><br><span class="line"></span><br></pre></td></tr></table></figure><blockquote><p>Locally, if you refresh the tab a few times in your browser, you can see it hitting the different IPs (pods) evenly, and the page shows v2.</p><p>At this point, there are two stable URLs, each load balancing across its own pods. If you want a canary effect, where half of the page traffic goes to v1 and half to v2, you could do it with a local nginx. But k8s already provides a wrapper for this, called Ingress.</p><blockquote><p>Ingress is an API object that manages external access to services in the cluster, typically through HTTP. 
Ingress can provide load balancing, SSL termination, and name-based virtual hosting.</p></blockquote></blockquote><p><a href="https://link.juejin.cn/?target=https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/" title="https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/">kubernetes.io&#x2F;docs&#x2F;tasks&#x2F;…</a></p><p>First, install the ingress addon:</p><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">minikube addons <span class="built_in">enable</span> ingress</span><br></pre></td></tr></table></figure><p>Then create ingress1.yaml:</p><figure class="highlight yaml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br></pre></td><td class="code"><pre><span class="line"><span class="attr">apiVersion:</span> <span class="string">networking.k8s.io/v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">Ingress</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line">  <span class="attr">name:</span> <span class="string">k8s-test</span></span><br><span class="line">  <span class="attr">annotations:</span></span><br><span class="line">    <span class="attr">nginx.ingress.kubernetes.io/rewrite-target:</span> <span class="string">/$1</span></span><br><span class="line"><span class="attr">spec:</span></span><br><span 
class="line">  <span class="attr">rules:</span></span><br><span class="line">    <span class="bullet">-</span> <span class="attr">http:</span></span><br><span class="line">        <span class="attr">paths:</span></span><br><span class="line">          <span class="bullet">-</span> <span class="attr">path:</span> <span class="string">/</span></span><br><span class="line">            <span class="attr">pathType:</span> <span class="string">Prefix</span></span><br><span class="line">            <span class="attr">backend:</span></span><br><span class="line">              <span class="attr">service:</span></span><br><span class="line">                <span class="attr">name:</span> <span class="string">deploy-service</span></span><br><span class="line">                <span class="attr">port:</span></span><br><span class="line">                  <span class="attr">number:</span> <span class="number">8080</span></span><br><span class="line"></span><br></pre></td></tr></table></figure><p>ingress2.yaml</p><figure class="highlight yaml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br></pre></td><td class="code"><pre><span class="line"><span class="attr">apiVersion:</span> <span class="string">networking.k8s.io/v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">Ingress</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span 
class="line">  <span class="attr">name:</span> <span class="string">k8s-test-v2-canary</span></span><br><span class="line">  <span class="attr">annotations:</span></span><br><span class="line">    <span class="attr">nginx.ingress.kubernetes.io/rewrite-target:</span> <span class="string">/$1</span></span><br><span class="line">    <span class="attr">nginx.ingress.kubernetes.io/canary:</span> <span class="string">&#x27;true&#x27;</span></span><br><span class="line">    <span class="attr">nginx.ingress.kubernetes.io/canary-weight:</span> <span class="string">&#x27;50&#x27;</span></span><br><span class="line"><span class="attr">spec:</span></span><br><span class="line">  <span class="attr">rules:</span></span><br><span class="line">    <span class="bullet">-</span> <span class="attr">http:</span></span><br><span class="line">        <span class="attr">paths:</span></span><br><span class="line">          <span class="bullet">-</span> <span class="attr">path:</span> <span class="string">/</span></span><br><span class="line">            <span class="attr">pathType:</span> <span class="string">Prefix</span></span><br><span class="line">            <span class="attr">backend:</span></span><br><span class="line">              <span class="attr">service:</span></span><br><span class="line">                <span class="attr">name:</span> <span class="string">test-k8s-v2</span></span><br><span class="line">                <span class="attr">port:</span></span><br><span class="line">                  <span class="attr">number:</span> <span class="number">8080</span></span><br></pre></td></tr></table></figure><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">kubectl apply -f ./yaml/deploy/ingress1.yaml -n <span class="built_in">test</span></span><br><span class="line">kubectl apply -f ./yaml/deploy/ingress2.yaml -n <span 
class="built_in">test</span></span><br><span class="line">kubectl get ingress -n <span class="built_in">test</span></span><br></pre></td></tr></table></figure><p><img src="https://s2.loli.net/2023/11/05/YSCMtH6Jrcw39zp.webp"></p><p>At this point, ADDRESS shows the minikube ip, 192.168.58.2 (docker's internal address, not reachable from the host), which means it has succeeded. Ingress listens on ports 80 and 443 by default. After minikube ssh into bash, curl localhost (port 80) several times.</p><p><img src="https://s2.loli.net/2023/11/05/XRIBzJlxhckweqy.webp"></p><p><strong>You can see that traffic hits v1 and v2 evenly, and also hits each IP (pod) evenly</strong>. We have achieved the expected grey release effect, which basically reproduces a production grey release. (You can also run minikube tunnel and then visit localhost in the browser; take care to free up local port 80 first.)</p><p>Finally, clean up with kubectl delete namespace test. If you did not create a namespace beforehand, cleanup is not as convenient. Also note that k8s is actually quite resource-hungry, and my private cloud VM can barely keep up, so shut the cluster down promptly when you are done.</p><h2 id="Bare-metal-setup"><a href="#Bare-metal-setup" class="headerlink" title="Bare metal setup"></a>Bare metal setup</h2><h2 id="Creating-a-Virtual-Machine"><a href="#Creating-a-Virtual-Machine" class="headerlink" title="Creating a Virtual Machine"></a>Creating a Virtual Machine</h2><p>I used the ESXi VM system from my private cloud and created three CentOS 7 VMs with at least 2c4g each. 
You can install them locally or consider renting a cluster from a cloud provider.</p><p><img src="https://s2.loli.net/2023/11/05/hRTPSYGUfbO5xgV.webp"></p><h2 id="Creating-a-Cluster"><a href="#Creating-a-Cluster" class="headerlink" title="Creating a Cluster"></a>Creating a Cluster</h2><p>For the sake of understanding, it is usually recommended to build a multi-node bare-metal cluster by hand the first time. In practice, though, it still comes down to kubeadm init and mechanically copied commands, and getting stuck on network and system configuration teaches you little, so a one-click setup script is recommended:</p><p><a href="https://link.juejin.cn/?target=https://github.com/lework/kainstall" title="https://github.com/lework/kainstall">github.com&#x2F;lework&#x2F;kain…</a></p><p>Go to the 192.168.31.153 terminal and execute:</p><figure class="highlight ini"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">export <span class="attr">MASTER_NODES</span>=<span class="string">&quot;192.168.31.153&quot;</span></span><br><span class="line">export <span class="attr">WORKER_NODES</span>=<span class="string">&quot;192.168.31.151,192.168.31.152&quot;</span></span><br><span class="line">export <span class="attr">SSH_USER</span>=<span class="string">&quot;root&quot;</span></span><br><span class="line">export <span class="attr">SSH_PASSWORD</span>=<span class="string">&quot;xxx&quot;</span></span><br><span class="line">export <span class="attr">SSH_PORT</span>=<span class="string">&quot;22&quot;</span></span><br><span class="line">export <span class="attr">KUBE_VERSION</span>=<span class="string">&quot;1.20.6&quot;</span></span><br><span class="line">bash kainstall-centos.sh init --version 
1.24.8</span><br></pre></td></tr></table></figure><blockquote><p>Kubernetes runs your workloads by placing containers into Pods that run on <a href="https://kubernetes.io/zh-cn/docs/concepts/workloads/">nodes</a>. A node may be a virtual or physical machine, depending on the cluster configuration it belongs to. Each node contains the services necessary to run <a href="https://link.juejin.cn/?target=https://kubernetes.io/zh-cn/docs/concepts/workloads/pods/" title="https://kubernetes.io/zh-cn/docs/concepts/workloads/pods/">Pods</a>; the nodes are managed by the control plane.</p></blockquote><p>Above, we used minikube as a single node inside local docker. In fact, the definition of a node is consistent with common networking terminology and can refer to a single machine. 
If there are two worker nodes in the cluster and a deployment wants to create four pods, those four pods will be spread evenly across the two nodes (machines).</p><p>When the build finishes, check the nodes:</p><figure class="highlight arduino"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl get nodes</span><br></pre></td></tr></table></figure><p><img src="https://s2.loli.net/2023/11/05/8vNsxVFuTaqnACY.webp"></p><p>The intranet IPs are:</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">192.168.31.153 k8s-master-node1</span><br><span class="line">192.168.31.151 k8s-worker-node1</span><br><span class="line">192.168.31.152 k8s-worker-node2</span><br></pre></td></tr></table></figure><h2 id="Using-dashboard"><a href="#Using-dashboard" class="headerlink" title="Using dashboard"></a>Using dashboard</h2><p>The script already has the dashboard installed, but rbac is a bit tricky to configure.</p><figure class="highlight scss"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line">yum install tmux</span><br><span class="line">tmux</span><br><span class="line"></span><br><span class="line">kubectl proxy --address=&#x27;0.0.0.0&#x27;  --accept-hosts=&#x27;^*$&#x27; --port=8001</span><br><span class="line">ctrl+b d</span><br><span 
class="line"></span><br></pre></td></tr></table></figure><p>Visit <a href="https://link.juejin.cn/?target=http://192.168.31.153:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/%23/login" title="http://192.168.31.153:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login">http://192.168.31.153:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login</a></p><p>I found that it requires a login and is restricted to https or localhost only. Here’s how to get around it.</p><p><a href="https://link.juejin.cn/?target=https://kubernetes.io/zh-cn/docs/tasks/access-application-cluster/web-ui-dashboard/" title="https://kubernetes.io/zh-cn/docs/tasks/access-application-cluster/web-ui-dashboard/">kubernetes.io&#x2F;zh-cn&#x2F;docs&#x2F;…</a></p><p>The dashboard is usually installed from a remote yaml, such as:</p><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml</span><br></pre></td></tr></table></figure><p>Download this locally, e.g. 
dashboard.yaml, search for ‘args’ (it appears in only one place), and add the last two lines shown here:</p><figure class="highlight yaml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br></pre></td><td class="code"><pre><span class="line"><span class="attr">containers:</span></span><br><span class="line">  <span class="bullet">-</span> <span class="attr">name:</span> <span class="string">kubernetes-dashboard</span></span><br><span class="line">    <span class="attr">image:</span> <span class="string">kubernetesui/dashboard:v2.7.0</span></span><br><span class="line">    <span class="attr">imagePullPolicy:</span> <span class="string">Always</span></span><br><span class="line">    <span class="attr">ports:</span></span><br><span class="line">      <span class="bullet">-</span> <span class="attr">containerPort:</span> <span class="number">8443</span></span><br><span class="line">        <span class="attr">protocol:</span> <span class="string">TCP</span></span><br><span class="line">    <span class="attr">args:</span></span><br><span class="line">      <span class="bullet">-</span> <span class="string">--auto-generate-certificates</span></span><br><span class="line">      <span class="bullet">-</span> <span class="string">--namespace=kubernetes-dashboard</span></span><br><span class="line">      <span class="bullet">-</span> <span class="string">--enable-skip-login</span></span><br><span class="line">      <span class="bullet">-</span> <span class="string">--disable-settings-authorizer</span></span><br></pre></td></tr></table></figure><p>At this point the login page can be skipped, but there are no data permissions once inside. 
You need to refer to <a href="https://github.com/kubernetes/dashboard/issues/4179#issuecomment-610078007" title="https://github.com/kubernetes/dashboard/issues/4179#issuecomment-610078007">this dashboard issue</a>. </p><p>Create admin.yaml, copy in the configuration from the comment linked above, run kubectl apply -f admin.yaml, and the dashboard then works without logging in.</p><p><img src="https://s2.loli.net/2023/11/05/XRnds8LoP7heYZ4.webp"></p><h2 id="Deploying-services-1"><a href="#Deploying-services-1" class="headerlink" title="Deploying services"></a>Deploying services</h2><p>Reuse the yaml from the minikube example above to create the two deployments and services. Note that these virtual machines probably perform far worse than the earlier standalone setup, so you can set a smaller number of pods.</p><p><img src="https://s2.loli.net/2023/11/05/IJzRn1oStmEKdvM.webp"></p><p>As you can see, the pods behind a single Service sit on node1 and node2, i.e. two different machines. A terminal on the 153 host can reach both service IPs directly.</p><p><img src="https://s2.loli.net/2023/11/05/puULwIj1DQ6O5Ci.webp"></p><h2 id="Deploying-Ingress"><a href="#Deploying-Ingress" class="headerlink" title="Deploying Ingress"></a>Deploying Ingress</h2><p>Reusing the same yaml as above, create the gray-scale release Ingress; note that nginx lands on node 192.168.31.151.</p><p><img src="https://s2.loli.net/2023/11/05/RpFw1BIdPJS3s2g.webp"></p><p>At this point, though, curl 192.168.31.151 cannot connect. Run:</p><figure class="highlight arduino"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl get service -n ingress-nginx</span><br></pre></td></tr></table></figure><p><img src="https://s2.loli.net/2023/11/05/sun6zShky8WdB1c.webp"></p><p>ingress-nginx has no external-ip, so I tested with the Cluster-IP instead. 
Running curl 10.96.103.254 several times:</p><p><img src="https://s2.loli.net/2023/11/05/Vgv3fzZWilrpbe5.webp"></p><p><strong>The hits are distributed evenly between v1 and v2, and also across each IP (pod)</strong>. And the pods really are spread over the two virtual machines, as expected.</p><p>Since nginx runs on node1, you can test by shutting node2 down while continuing to curl from the master: the pods deployed on node1 stay reachable, which is exactly the disaster-recovery high availability we wanted. Power node2 back on and the cluster recovers.</p><h1 id="Summary"><a href="#Summary" class="headerlink" title="Summary"></a>Summary</h1><p>We have yet to build more complex scenarios such as data persistence and deploying stateful applications. Even so, after the above we know the k8s concepts of <code>pod</code>, <code>deployment</code>, <code>service</code>, <code>ingress</code>, and <code>node</code> very well, we successfully built a cluster, and we tried out the gray-scale release feature, so it is fair to say the k8s skill tree is unlocked. From here on, the articles the system recommends will become its nutrients, and it will keep growing until it becomes a big tree.</p><p>Writing the article was itself a learning process, and I would ask readers to point out any mistakes or omissions. If this article helped you, feel free to like and bookmark it.</p><p><img src="https://s2.loli.net/2023/11/05/bHOC5mxLtTPsp64.png"></p>]]></content>
    
    
    <summary type="html">Minikube is a lightweight Kubernetes implementation that creates VMs and deploys simple clusters of just one node on your local machine.</summary>
    
    
    
    <category term="Technology" scheme="https://www.nablepart.com/categories/Technology/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="lightweight" scheme="https://www.nablepart.com/tags/lightweight/"/>
    
    <category term="Kubernetes" scheme="https://www.nablepart.com/tags/Kubernetes/"/>
    
    <category term="implementation" scheme="https://www.nablepart.com/tags/implementation/"/>
    
    <category term="improve" scheme="https://www.nablepart.com/tags/improve/"/>
    
    <category term="VMs" scheme="https://www.nablepart.com/tags/VMs/"/>
    
    <category term="clusters" scheme="https://www.nablepart.com/tags/clusters/"/>
    
    <category term="machine" scheme="https://www.nablepart.com/tags/machine/"/>
    
  </entry>
  
  <entry>
    <title>Still using Selenium for crawlers? That is outdated</title>
    <link href="https://www.nablepart.com/8100e4205ef6/"/>
    <id>https://www.nablepart.com/8100e4205ef6/</id>
    <published>2023-11-04T16:04:00.000Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
    <content type="html"><![CDATA[<h3 id="Synopsis"><a href="#Synopsis" class="headerlink" title="Synopsis"></a>Synopsis</h3><p>Recently I ran into a problem: my Chrome browser upgraded, but the matching webdriver had not, so I was forced to fall back on Safari for my crawler.</p><p><img src="https://s2.loli.net/2023/11/05/ZU3PYOIykNp4gMn.webp"></p><p>Safari may be the browser that ships with the Mac, but I am so used to Chrome that I cannot change my habits. Still, lately I have had no choice but to post news through Safari.</p><p>I have also long used Selenium as my crawler framework. It is built around webdriver, which brings a number of problems:</p><ol><li>Configuration is fiddly and not very newbie-friendly.</li><li>The driver version must match the browser version. For a while, a Chrome upgrade without a matching driver upgrade left my scripts unable to drive the browser at all.</li><li>The new Selenium API differs greatly from the old one. While debugging, I found that many code examples from the old documentation no longer work in the new version.</li></ol><p>Well, the savior has arrived: Selenium as a crawler tool is now history.</p><p>DrissionPage is a Python-based web automation tool. It can control the browser as well as send and receive packets, and it can combine the two. It pairs the convenience of browser automation with the efficiency of requests. It is powerful, with tons of built-in conveniences, clean and elegant syntax, and a small, newbie-friendly codebase.</p><p>This is quoted from the <a href="https://link.juejin.cn/?target=https://g1879.gitee.io/drissionpagedocs/" title="https://g1879.gitee.io/drissionpagedocs/">official DrissionPage documentation</a>; you will have to try it to see how it works. 
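<p>Pain point 2 above is strict about the major version: Chrome and its chromedriver must share it. The check itself is trivial; a stdlib-only sketch (the function name is mine, not part of any Selenium API):</p>

```python
def versions_compatible(browser_version, driver_version):
    """Chrome and chromedriver cooperate only when their major versions match."""
    def major(version):
        return int(version.split(".")[0])
    return major(browser_version) == major(driver_version)

print(versions_compatible("118.0.5993.70", "118.0.5993.70"))   # → True
print(versions_compatible("119.0.6045.105", "118.0.5993.70"))  # → False
```

<p>The moment the browser auto-updates, this turns False and the Selenium scripts start failing until the driver is refreshed, which is exactly the situation that pushed me onto Safari.</p>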
Come explore with <code>shigen</code>.</p><h3 id="Installation"><a href="#Installation" class="headerlink" title="Installation"></a>Installation</h3><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">pip install DrissionPage</span><br></pre></td></tr></table></figure><p><img src="https://s2.loli.net/2023/11/05/3V1MQGNCFnEpKJu.webp"></p><h3 id="Code-test"><a href="#Code-test" class="headerlink" title="Code test"></a>Code test</h3><p>Following the official example, <a href="https://link.juejin.cn/?target=https://g1879.gitee.io/drissionpagedocs/demos/maoyan_TOP100/" title="https://g1879.gitee.io/drissionpagedocs/demos/maoyan_TOP100/">scraping the Maoyan movie Top 100 list</a>, we copy and paste the code as-is.</p><p><img src="https://s2.loli.net/2023/11/05/tgsihvzoP7uDe2U.webp"></p><p>After a few seconds it opened a new browser tab, paged through the list like crazy, and all the data landed in <code>data.csv</code>.<img src="https://s2.loli.net/2023/11/05/5UJBEtmY2HkVzGu.webp"></p><p>This is much easier than using the <code>requests</code> library before!</p><p><code>shigen</code> found this straight-up fun and had to write some code of his own next: crawling <a href="https://link.juejin.cn/?target=https://bz.zzzmh.cn/index" title="https://bz.zzzmh.cn/index">minimalist wallpapers</a>. It is a little heartbreaking, though, that the author of a free site has to endure that kind of traffic assault. It certainly proves the saying: <strong>free is the most expensive!</strong></p><p><img src="https://s2.loli.net/2023/11/05/qVnU6CHabKL3WZ4.webp"></p><p><img src="https://s2.loli.net/2023/11/05/JwazWVv1HsR7Ci6.webp"></p><p>The code is simple, but in the end the download did not succeed: the front end has obfuscated the file addresses.</p><p><img src="https://s2.loli.net/2023/11/05/BZI2paNYlVksDWP.webp"></p><p>I am sure a good solution will come along, and <code>shigen</code> will keep this updated. In any case, <code>DrissionPage</code> is a great framework!</p><p>For more on how to use it, check out the documentation.</p>]]></content>
    
    
    <summary type="html">Still using Selenium for your crawlers? It is outdated! Meet DrissionPage, a new homegrown crawler framework that is genuinely nice, simple, and lightweight.</summary>
    
    
    
    <category term="Technology" scheme="https://www.nablepart.com/categories/Technology/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="framework" scheme="https://www.nablepart.com/tags/framework/"/>
    
    <category term="Backend Technology Sharing" scheme="https://www.nablepart.com/tags/Backend-Technology-Sharing/"/>
    
    <category term="network" scheme="https://www.nablepart.com/tags/network/"/>
    
    <category term="recognize" scheme="https://www.nablepart.com/tags/recognize/"/>
    
    <category term="Crawler" scheme="https://www.nablepart.com/tags/Crawler/"/>
    
    <category term="absolutely" scheme="https://www.nablepart.com/tags/absolutely/"/>
    
    <category term="selenium" scheme="https://www.nablepart.com/tags/selenium/"/>
    
  </entry>
  
  <entry>
    <title>ChatGPT + vector database to build a privatized knowledge base (II)</title>
    <link href="https://www.nablepart.com/977e746d284f/"/>
    <id>https://www.nablepart.com/977e746d284f/</id>
    <published>2023-11-04T16:03:00.000Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
    <content type="html"><![CDATA[<blockquote><p><a href="https://juejin.cn/post/7227079326594859068" title="https://juejin.cn/post/7227079326594859068">ChatGPT+Vector database to build a privatized knowledge base</a> The meaning of vector databases has been introduced.<br>This time, let’s go hands-on and take a look at the schema design and interaction flow first</p></blockquote><h1 id="1-Table-Structure-Design"><a href="#1-Table-Structure-Design" class="headerlink" title="1. Table Structure Design"></a>1. Table Structure Design</h1><h2 id="1-MySQL-Table-Design"><a href="#1-MySQL-Table-Design" class="headerlink" title="1. MySQL Table Design"></a>1. MySQL Table Design</h2><h2 id="1-knowledge-base-Knowledge-Base-Summary-Table"><a href="#1-knowledge-base-Knowledge-Base-Summary-Table" class="headerlink" title="1. knowledge_base (Knowledge Base Summary Table)"></a>1. knowledge_base (Knowledge Base Summary Table)</h2><figure class="highlight sql"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">CREATE TABLE</span> `knowledge_base` (</span><br><span class="line">  `id` <span class="type">bigint</span> unsigned <span class="keyword">NOT NULL</span> AUTO_INCREMENT COMMENT <span class="string">&#x27;知识库id&#x27;</span>,</span><br><span class="line">  `create_time` datetime <span class="keyword">NOT NULL</span> <span class="keyword">DEFAULT</span> <span class="built_in">CURRENT_TIMESTAMP</span> COMMENT <span class="string">&#x27;创建时间&#x27;</span>,</span><br><span class="line">  `create_by` <span class="type">varchar</span>(<span class="number">64</span>) <span 
class="keyword">CHARACTER SET</span> utf8mb4 <span class="keyword">COLLATE</span> utf8mb4_0900_ai_ci <span class="keyword">DEFAULT</span> <span class="string">&#x27;&#x27;</span> COMMENT <span class="string">&#x27;创建者&#x27;</span>,</span><br><span class="line">  `update_time` datetime <span class="keyword">NOT NULL</span> <span class="keyword">DEFAULT</span> <span class="built_in">CURRENT_TIMESTAMP</span> <span class="keyword">ON</span> <span class="keyword">UPDATE</span> <span class="built_in">CURRENT_TIMESTAMP</span> COMMENT <span class="string">&#x27;修改时间&#x27;</span>,</span><br><span class="line">  `update_by` <span class="type">varchar</span>(<span class="number">64</span>) <span class="keyword">CHARACTER SET</span> utf8mb4 <span class="keyword">COLLATE</span> utf8mb4_0900_ai_ci <span class="keyword">DEFAULT</span> <span class="string">&#x27;&#x27;</span> COMMENT <span class="string">&#x27;更新者&#x27;</span>,</span><br><span class="line">  `name` <span class="type">varchar</span>(<span class="number">50</span>) <span class="keyword">CHARACTER SET</span> utf8mb4 <span class="keyword">COLLATE</span> utf8mb4_0900_ai_ci <span class="keyword">NOT NULL</span> COMMENT <span class="string">&#x27;知识库名称&#x27;</span>,</span><br><span class="line">  `description` <span class="type">varchar</span>(<span class="number">255</span>) <span class="keyword">CHARACTER SET</span> utf8mb4 <span class="keyword">COLLATE</span> utf8mb4_0900_ai_ci <span class="keyword">DEFAULT</span> <span class="keyword">NULL</span> COMMENT <span class="string">&#x27;知识库描述&#x27;</span>,</span><br><span class="line">  `vector_collection_name` <span class="type">varchar</span>(<span class="number">50</span>) <span class="keyword">CHARACTER SET</span> utf8mb4 <span class="keyword">COLLATE</span> utf8mb4_0900_ai_ci <span class="keyword">DEFAULT</span> <span class="keyword">NULL</span> COMMENT <span class="string">&#x27;向量数据库的表名&#x27;</span>,</span><br><span class="line">  <span class="keyword">PRIMARY 
KEY</span> (`id`)</span><br><span class="line">) ENGINE<span class="operator">=</span>InnoDB  <span class="keyword">DEFAULT</span> CHARSET<span class="operator">=</span>utf8mb4 <span class="keyword">COLLATE</span><span class="operator">=</span>utf8mb4_0900_ai_ci COMMENT<span class="operator">=</span><span class="string">&#x27;知识库总表&#x27;</span>;</span><br></pre></td></tr></table></figure><h3 id="2、knowledge-file（Knowledge-base-document-management）"><a href="#2、knowledge-file（Knowledge-base-document-management）" class="headerlink" title="2、knowledge_file（Knowledge base document management）"></a>2、knowledge_file（Knowledge base document management）</h3><figure class="highlight sql"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">CREATE TABLE</span> `knowledge_file` (</span><br><span class="line">  `id` <span class="type">bigint</span> unsigned <span class="keyword">NOT NULL</span> AUTO_INCREMENT COMMENT <span class="string">&#x27;文件id&#x27;</span>,</span><br><span class="line">  `create_time` datetime <span class="keyword">NOT NULL</span> <span class="keyword">DEFAULT</span> <span class="built_in">CURRENT_TIMESTAMP</span> COMMENT <span class="string">&#x27;创建时间&#x27;</span>,</span><br><span class="line">  `create_by` <span class="type">varchar</span>(<span class="number">64</span>) <span class="keyword">CHARACTER SET</span> utf8mb4 <span class="keyword">COLLATE</span> utf8mb4_0900_ai_ci <span class="keyword">DEFAULT</span> <span 
class="string">&#x27;&#x27;</span> COMMENT <span class="string">&#x27;创建者&#x27;</span>,</span><br><span class="line">  `update_time` datetime <span class="keyword">NOT NULL</span> <span class="keyword">DEFAULT</span> <span class="built_in">CURRENT_TIMESTAMP</span> <span class="keyword">ON</span> <span class="keyword">UPDATE</span> <span class="built_in">CURRENT_TIMESTAMP</span> COMMENT <span class="string">&#x27;修改时间&#x27;</span>,</span><br><span class="line">  `update_by` <span class="type">varchar</span>(<span class="number">64</span>) <span class="keyword">CHARACTER SET</span> utf8mb4 <span class="keyword">COLLATE</span> utf8mb4_0900_ai_ci <span class="keyword">DEFAULT</span> <span class="string">&#x27;&#x27;</span> COMMENT <span class="string">&#x27;更新者&#x27;</span>,</span><br><span class="line">  `knowledge_id` <span class="type">bigint</span> <span class="keyword">NOT NULL</span> COMMENT <span class="string">&#x27;知识库id&#x27;</span>,</span><br><span class="line">  `file_name` <span class="type">varchar</span>(<span class="number">65</span>) <span class="keyword">CHARACTER SET</span> utf8mb4 <span class="keyword">COLLATE</span> utf8mb4_0900_ai_ci <span class="keyword">NOT NULL</span> COMMENT <span class="string">&#x27;文件名&#x27;</span>,</span><br><span class="line">  `oss_id` <span class="type">bigint</span> <span class="keyword">NOT NULL</span> COMMENT <span class="string">&#x27;ossId&#x27;</span>,</span><br><span class="line">  `file_status` <span class="type">int</span> <span class="keyword">NOT NULL</span> <span class="keyword">DEFAULT</span> <span class="string">&#x27;1&#x27;</span> COMMENT <span class="string">&#x27;0向量处理中，1未激活，2已完成，3失败&#x27;</span>,</span><br><span class="line">  `fail_reason` <span class="type">varchar</span>(<span class="number">100</span>) <span class="keyword">CHARACTER SET</span> utf8mb4 <span class="keyword">COLLATE</span> utf8mb4_0900_ai_ci <span class="keyword">DEFAULT</span> <span class="keyword">NULL</span> COMMENT <span 
class="string">&#x27;失败原因&#x27;</span>,</span><br><span class="line">  `slice_type` <span class="type">int</span> <span class="keyword">DEFAULT</span> <span class="keyword">NULL</span> COMMENT <span class="string">&#x27;切分类型：1分隔符，2字数&#x27;</span>,</span><br><span class="line">  `slice_value` <span class="type">varchar</span>(<span class="number">10</span>) <span class="keyword">CHARACTER SET</span> utf8mb4 <span class="keyword">COLLATE</span> utf8mb4_0900_ai_ci <span class="keyword">DEFAULT</span> <span class="keyword">NULL</span> COMMENT <span class="string">&#x27;切分规则数据&#x27;</span>,</span><br><span class="line">  <span class="keyword">PRIMARY KEY</span> (`id`)</span><br><span class="line">) ENGINE<span class="operator">=</span>InnoDB  <span class="keyword">DEFAULT</span> CHARSET<span class="operator">=</span>utf8mb4 <span class="keyword">COLLATE</span><span class="operator">=</span>utf8mb4_0900_ai_ci COMMENT<span class="operator">=</span><span class="string">&#x27;知识库文件管理&#x27;</span>;</span><br></pre></td></tr></table></figure><h3 id="4、knowledge-file-slice-vector（Knowledge-base-document-slicing-steering-volume-data-table）"><a href="#4、knowledge-file-slice-vector（Knowledge-base-document-slicing-steering-volume-data-table）" class="headerlink" title="4、knowledge_file_slice_vector（Knowledge base document slicing steering volume data table）"></a>4、knowledge_file_slice_vector（Knowledge base document slicing steering volume data table）</h3><figure class="highlight sql"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br></pre></td><td 
class="code"><pre><span class="line"><span class="keyword">CREATE TABLE</span> `knowledge_file_slice_vector` (</span><br><span class="line">  `id` <span class="type">bigint</span> unsigned <span class="keyword">NOT NULL</span> AUTO_INCREMENT COMMENT <span class="string">&#x27;id&#x27;</span>,</span><br><span class="line">  `create_time` datetime <span class="keyword">NOT NULL</span> <span class="keyword">DEFAULT</span> <span class="built_in">CURRENT_TIMESTAMP</span> COMMENT <span class="string">&#x27;创建时间&#x27;</span>,</span><br><span class="line">  `create_by` <span class="type">varchar</span>(<span class="number">64</span>) <span class="keyword">CHARACTER SET</span> utf8mb4 <span class="keyword">COLLATE</span> utf8mb4_0900_ai_ci <span class="keyword">DEFAULT</span> <span class="string">&#x27;&#x27;</span> COMMENT <span class="string">&#x27;创建者&#x27;</span>,</span><br><span class="line">  `update_time` datetime <span class="keyword">NOT NULL</span> <span class="keyword">DEFAULT</span> <span class="built_in">CURRENT_TIMESTAMP</span> <span class="keyword">ON</span> <span class="keyword">UPDATE</span> <span class="built_in">CURRENT_TIMESTAMP</span> COMMENT <span class="string">&#x27;修改时间&#x27;</span>,</span><br><span class="line">  `update_by` <span class="type">varchar</span>(<span class="number">64</span>) <span class="keyword">CHARACTER SET</span> utf8mb4 <span class="keyword">COLLATE</span> utf8mb4_0900_ai_ci <span class="keyword">DEFAULT</span> <span class="string">&#x27;&#x27;</span> COMMENT <span class="string">&#x27;更新者&#x27;</span>,</span><br><span class="line">  `knowledge_id` <span class="type">bigint</span> <span class="keyword">DEFAULT</span> <span class="keyword">NULL</span> COMMENT <span class="string">&#x27;知识库id&#x27;</span>,</span><br><span class="line">  `knowledge_file_id` <span class="type">bigint</span> <span class="keyword">DEFAULT</span> <span class="keyword">NULL</span> COMMENT <span class="string">&#x27;知识库文件id&#x27;</span>,</span><br><span 
class="line">  `slice_text` text <span class="keyword">CHARACTER SET</span> utf8mb4 <span class="keyword">COLLATE</span> utf8mb4_0900_ai_ci COMMENT <span class="string">&#x27;切片数据&#x27;</span>,</span><br><span class="line">  `vector_id` <span class="type">bigint</span> <span class="keyword">DEFAULT</span> <span class="keyword">NULL</span> COMMENT <span class="string">&#x27;向量数据id&#x27;</span>,</span><br><span class="line">  <span class="keyword">PRIMARY KEY</span> (`id`),</span><br><span class="line">  KEY `idx_knowledge` (`knowledge_id`),</span><br><span class="line">  KEY `idx_knpwledge_file` (`knowledge_file_id`)</span><br><span class="line">) ENGINE<span class="operator">=</span>InnoDB  <span class="keyword">DEFAULT</span> CHARSET<span class="operator">=</span>utf8mb4 <span class="keyword">COLLATE</span><span class="operator">=</span>utf8mb4_0900_ai_ci COMMENT<span class="operator">=</span><span class="string">&#x27;知识库文件切片转向量数据表&#x27;</span>;</span><br></pre></td></tr></table></figure><h3 id="4、knowledge-usage-config-（Knowledge-base-applications）"><a href="#4、knowledge-usage-config-（Knowledge-base-applications）" class="headerlink" title="4、knowledge_usage_config （Knowledge base applications）"></a>4、knowledge_usage_config （Knowledge base applications）</h3><figure class="highlight sql"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br></pre></td><td class="code"><pre><span 
class="line"><span class="keyword">CREATE TABLE</span> `knowledge_usage_config` (</span><br><span class="line">  `id` <span class="type">bigint</span> unsigned <span class="keyword">NOT NULL</span> AUTO_INCREMENT COMMENT <span class="string">&#x27;id&#x27;</span>,</span><br><span class="line">  `create_time` datetime <span class="keyword">NOT NULL</span> <span class="keyword">DEFAULT</span> <span class="built_in">CURRENT_TIMESTAMP</span> COMMENT <span class="string">&#x27;创建时间&#x27;</span>,</span><br><span class="line">  `create_by` <span class="type">varchar</span>(<span class="number">64</span>) <span class="keyword">CHARACTER SET</span> utf8mb4 <span class="keyword">COLLATE</span> utf8mb4_0900_ai_ci <span class="keyword">DEFAULT</span> <span class="string">&#x27;&#x27;</span> COMMENT <span class="string">&#x27;创建者&#x27;</span>,</span><br><span class="line">  `update_time` datetime <span class="keyword">NOT NULL</span> <span class="keyword">DEFAULT</span> <span class="built_in">CURRENT_TIMESTAMP</span> <span class="keyword">ON</span> <span class="keyword">UPDATE</span> <span class="built_in">CURRENT_TIMESTAMP</span> COMMENT <span class="string">&#x27;修改时间&#x27;</span>,</span><br><span class="line">  `update_by` <span class="type">varchar</span>(<span class="number">64</span>) <span class="keyword">CHARACTER SET</span> utf8mb4 <span class="keyword">COLLATE</span> utf8mb4_0900_ai_ci <span class="keyword">DEFAULT</span> <span class="string">&#x27;&#x27;</span> COMMENT <span class="string">&#x27;更新者&#x27;</span>,</span><br><span class="line">  `app_name` <span class="type">varchar</span>(<span class="number">30</span>) <span class="keyword">DEFAULT</span> <span class="keyword">NULL</span> COMMENT <span class="string">&#x27;应用配置名称&#x27;</span>,</span><br><span class="line">  `app_description` <span class="type">varchar</span>(<span class="number">255</span>) <span class="keyword">DEFAULT</span> <span class="keyword">NULL</span> COMMENT <span 
class="string">&#x27;应用配置描述&#x27;</span>,</span><br><span class="line">  `app_icon` <span class="type">varchar</span>(<span class="number">255</span>) <span class="keyword">CHARACTER SET</span> utf8mb4 <span class="keyword">COLLATE</span> utf8mb4_0900_ai_ci <span class="keyword">DEFAULT</span> <span class="keyword">NULL</span> COMMENT <span class="string">&#x27;应用图标&#x27;</span>,</span><br><span class="line">  `prompts_config` text <span class="keyword">CHARACTER SET</span> utf8mb4 <span class="keyword">COLLATE</span> utf8mb4_0900_ai_ci COMMENT <span class="string">&#x27;prompts模板&#x27;</span>,</span><br><span class="line">  `knowledge_id` <span class="type">bigint</span> <span class="keyword">DEFAULT</span> <span class="keyword">NULL</span> COMMENT <span class="string">&#x27;知识库id&#x27;</span>,</span><br><span class="line">  `top_k` <span class="type">int</span> <span class="keyword">DEFAULT</span> <span class="keyword">NULL</span> COMMENT <span class="string">&#x27;topK&#x27;</span>,</span><br><span class="line">  `top_p` <span class="keyword">double</span> <span class="keyword">DEFAULT</span> <span class="keyword">NULL</span> COMMENT <span class="string">&#x27;topP&#x27;</span>,</span><br><span class="line">  `temperature` <span class="type">varchar</span>(<span class="number">5</span>) <span class="keyword">DEFAULT</span> <span class="keyword">NULL</span> COMMENT <span class="string">&#x27;温度&#x27;</span>,</span><br><span class="line">  `app_code` <span class="type">varchar</span>(<span class="number">100</span>) <span class="keyword">CHARACTER SET</span> utf8mb4 <span class="keyword">COLLATE</span> utf8mb4_0900_ai_ci <span class="keyword">DEFAULT</span> <span class="keyword">NULL</span> COMMENT <span class="string">&#x27;appCode&#x27;</span>,</span><br><span class="line">  `app_secret` <span class="type">varchar</span>(<span class="number">100</span>) <span class="keyword">CHARACTER SET</span> utf8mb4 <span class="keyword">COLLATE</span> utf8mb4_0900_ai_ci 
<span class="keyword">DEFAULT</span> <span class="keyword">NULL</span> COMMENT <span class="string">&#x27;appSecret&#x27;</span>,</span><br><span class="line">  <span class="keyword">PRIMARY KEY</span> (`id`),</span><br><span class="line">  KEY `idx_app` (`app_code`,`app_secret`)</span><br><span class="line">) ENGINE<span class="operator">=</span>InnoDB  <span class="keyword">DEFAULT</span> CHARSET<span class="operator">=</span>utf8mb4 <span class="keyword">COLLATE</span><span class="operator">=</span>utf8mb4_0900_ai_ci COMMENT<span class="operator">=</span><span class="string">&#x27;知识库应用&#x27;</span>;</span><br></pre></td></tr></table></figure><h2 id="2-Vector-database-table-design"><a href="#2-Vector-database-table-design" class="headerlink" title="2. Vector database table design"></a>2. Vector database table design</h2><p><strong>Note: each knowledge base corresponds to one vector data table.</strong></p><p>Each record holds: data ID, data title, data text, and the vector embedding of the text.</p><h2 id="2-Interaction-flow"><a href="#2-Interaction-flow" class="headerlink" title="2. Interaction flow"></a>2. Interaction flow</h2><p><img src="https://s2.loli.net/2023/11/05/1jLlQEKopi6WmPn.webp"></p><p>Online version: <a href="https://link.juejin.cn/?target=https://www.processon.com/diagraming/64bf380800357b03b718c4b3" title="https://www.processon.com/diagraming/64bf380800357b03b718c4b3">www.processon.com/diagraming/...</a></p><h1 id="3-External-calls"><a href="#3-External-calls" class="headerlink" title="3. External calls"></a>3. 
External calls</h1><p>Reference request parameters:</p><figure class="highlight json"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line"><span class="punctuation">&#123;</span></span><br><span class="line">    <span class="attr">&quot;textValue&quot;</span><span class="punctuation">:</span> <span class="string">&quot;the query question&quot;</span><span class="punctuation">,</span></span><br><span class="line">    <span class="attr">&quot;appCode&quot;</span><span class="punctuation">:</span> <span class="string">&quot;the application appCode&quot;</span><span class="punctuation">,</span></span><br><span class="line">    <span class="attr">&quot;appSecret&quot;</span><span class="punctuation">:</span> <span class="string">&quot;the application appSecret&quot;</span></span><br><span class="line"><span class="punctuation">&#125;</span></span><br></pre></td></tr></table></figure><p>Reference response parameters:</p><figure class="highlight json"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br></pre></td><td class="code"><pre><span class="line"><span class="punctuation">&#123;</span></span><br><span class="line">    <span class="attr">&quot;code&quot;</span><span class="punctuation">:</span><span class="number">200</span><span class="punctuation">,</span></span><br><span class="line">    <span class="attr">&quot;msg&quot;</span><span class="punctuation">:</span><span class="string">&quot;operation successful&quot;</span><span 
class="punctuation">,</span></span><br><span class="line">    <span class="attr">&quot;data&quot;</span><span class="punctuation">:</span><span class="punctuation">&#123;</span></span><br><span class="line">        <span class="attr">&quot;result&quot;</span><span class="punctuation">:</span><span class="string">&quot;the returned result&quot;</span><span class="punctuation">,</span></span><br><span class="line">        <span class="attr">&quot;sourceVoList&quot;</span><span class="punctuation">:</span><span class="punctuation">[</span></span><br><span class="line">            <span class="punctuation">&#123;</span></span><br><span class="line">                <span class="attr">&quot;title&quot;</span><span class="punctuation">:</span><span class="string">&quot;source title&quot;</span><span class="punctuation">,</span></span><br><span class="line">                <span class="attr">&quot;text&quot;</span><span class="punctuation">:</span><span class="string">&quot;source content&quot;</span></span><br><span class="line">            <span class="punctuation">&#125;</span></span><br><span class="line">        <span class="punctuation">]</span></span><br><span class="line">    <span class="punctuation">&#125;</span></span><br><span class="line"><span class="punctuation">&#125;</span></span><br></pre></td></tr></table></figure>]]></content>
    
    
    <summary type="html">Explains how to implement a knowledge base with ChatGPT plus a vector database, and how to use a vector database to build a privatized knowledge base, with examples of MySQL table design and vector database table design, the knowledge base interaction flow, and the interface for external calls.</summary>
    
    
    
    <category term="Technology" scheme="https://www.nablepart.com/categories/Technology/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="Backend Technology Sharing" scheme="https://www.nablepart.com/tags/Backend-Technology-Sharing/"/>
    
    <category term="network" scheme="https://www.nablepart.com/tags/network/"/>
    
    <category term="service" scheme="https://www.nablepart.com/tags/service/"/>
    
    <category term="catastrophic crash" scheme="https://www.nablepart.com/tags/catastrophic-crash/"/>
    
    <category term="ChatGPT" scheme="https://www.nablepart.com/tags/ChatGPT/"/>
    
    <category term="good solution" scheme="https://www.nablepart.com/tags/good-solution/"/>
    
    <category term="Database" scheme="https://www.nablepart.com/tags/Database/"/>
    
  </entry>
  
  <entry>
    <title>ChatGPT + vector database to build a privatized knowledge base (I)</title>
    <link href="https://www.nablepart.com/fd007f4370ad/"/>
    <id>https://www.nablepart.com/fd007f4370ad/</id>
    <published>2023-11-04T16:02:00.000Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
    <content type="html"><![CDATA[<h2 id="1-Vector-Database"><a href="#1-Vector-Database" class="headerlink" title="1. Vector Database"></a>1. Vector Database</h2><h3 id="1-vector-database-introduction"><a href="#1-vector-database-introduction" class="headerlink" title="1. Vector database introduction"></a>1. Vector database introduction</h3><p>When we use image search to find images or voice search to find audio, what is stored and compared in the database is not the images and voice clips themselves, but the “features” extracted by deep learning and other algorithms. These are usually 256&#x2F;512-element float arrays, which can be represented mathematically as vectors.</p><p>A vector database is a database used to store, retrieve, and analyze vectors. It is called a database because it has the following characteristics:</p><p>a) It provides a standard access interface, lowering the barrier to entry for users.</p><p>b) It provides efficient data organization, retrieval, and analysis capabilities. Users generally need to manage structured data alongside the vectors they store and retrieve, i.e., it supports the structured-data management abilities of a traditional database.</p><h3 id="2-Advantages-of-Vector-Database"><a href="#2-Advantages-of-Vector-Database" class="headerlink" title="2. Advantages of Vector Database"></a>2. Advantages of Vector Database</h3><p>An example:</p><p><strong>Q</strong>: What is the difference between using Embedding and just using full-text search on a database?</p><p><strong>Answer</strong>: Suppose my database contains the text “Mice are looking for food” and a user enters the query “cheese 🧀”. Full-text search cannot match this passage at all, since the two share no overlapping words. 
But with Embedding, both passages are turned into vectors, and a similarity search can then be performed between them.</p><p>Since “mouse” and “cheese 🧀” are semantically related, the user gets results for the passage despite the lack of matching words.</p><h3 id="3-Problems-solved-by-vector-databases"><a href="#3-Problems-solved-by-vector-databases" class="headerlink" title="3. Problems solved by vector databases"></a>3. Problems solved by vector databases</h3><p>From a technical point of view, vector databases solve two main problems: efficient retrieval and efficient analysis.</p><ol><li>Retrieval: typically image retrieval, such as face, body, and vehicle retrieval, as well as Taobao’s product image search and face payment.</li><li>Analysis: there are also many urban applications, such as “face collision” in public security, where portraits captured around two crime scenes with a similar modus operandi are compared to find the people who appeared at both scenes.</li></ol><h3 id="4-Some-vector-database-products"><a href="#4-Some-vector-database-products" class="headerlink" title="4. Some vector database products"></a>4. Some vector database products</h3><p>Milvus, Pinecone, Vespa, Weaviate, Vald, GSI APU boards for Elasticsearch and OpenSearch, Qdrant</p><p>Details: <a href="https://www.modb.pro/db/516016" title="https://www.modb.pro/db/516016">7 Vector Database Comparisons: Milvus, Pinecone, Vespa, Weaviate, Vald, GSI and Qdrant</a></p><h2 id="3-OpenAI-ChatGPT-API-Documentation-of-Embedding"><a href="#3-OpenAI-ChatGPT-API-Documentation-of-Embedding" class="headerlink" title="3. OpenAI ChatGPT API Documentation of Embedding"></a>3. 
OpenAI ChatGPT API Documentation of Embedding</h2><h2 id="1-Embedding-introduced-by-GPT"><a href="#1-Embedding-introduced-by-GPT" class="headerlink" title="1. Embedding introduced by GPT"></a>1. Embedding introduced by GPT</h2><p>In the field of natural language processing and machine learning, “embeddings” refer to the process of transforming words, phrases, or text into a continuous vector space. This vector space is often called embedding space, and the resulting vectors are called embedding vectors or vector embedding.</p><p>Embedding vectors capture semantic information about words, phrases, or text, allowing them to be compared and computed mathematically. Such comparisons and computations are often used in natural language processing and machine learning for a variety of tasks, such as text categorization, semantic search, and word similarity computation.</p><p>In Chinese context, “embeddings” is often translated as “word vectors” or “vector representations”. These translations emphasize the characteristics of embedding vectors, i.e., words are converted into vectors and represented as points in the embedding space.</p><h3 id="2-What-is-Embedding"><a href="#2-What-is-Embedding" class="headerlink" title="2. What is Embedding"></a>2. What is Embedding</h3><p>Embedding is a vector (list) of floating point numbers. The distance between two vectors is used to measure the correlation between them. A smaller distance indicates high correlation and a larger distance indicates low correlation.</p><h3 id="3-How-to-use-GPT-API"><a href="#3-How-to-use-GPT-API" class="headerlink" title="3. How to use GPT API?"></a>3. 
How to use GPT API?</h3><p>Example request:</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">curl https://api.openai.com/v1/embeddings \</span><br><span class="line">  -H &quot;Content-Type: application/json&quot; \</span><br><span class="line">  -H &quot;Authorization: Bearer $OPENAI_API_KEY&quot; \</span><br><span class="line">  -d &#x27;&#123;&quot;input&quot;: &quot;Your text string goes here&quot;,</span><br><span class="line">       &quot;model&quot;:&quot;text-embedding-ada-002&quot;&#125;&#x27;</span><br></pre></td></tr></table></figure><p>Example Response:</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br></pre></td><td class="code"><pre><span class="line">&#123;</span><br><span class="line">  &quot;data&quot;: [</span><br><span class="line">    &#123;</span><br><span class="line">      &quot;embedding&quot;: [</span><br><span class="line">        -0.006929283495992422,</span><br><span class="line">        -0.005336422007530928,</span><br><span class="line">        ...</span><br><span class="line"></span><br><span class="line">     
   -4.547132266452536e-05,</span><br><span class="line">        -0.024047505110502243</span><br><span class="line">      ],</span><br><span class="line">      &quot;index&quot;: 0,</span><br><span class="line">      &quot;object&quot;: &quot;embedding&quot;</span><br><span class="line">    &#125;</span><br><span class="line">  ],</span><br><span class="line">  &quot;model&quot;: &quot;text-embedding-ada-002&quot;,</span><br><span class="line">  &quot;object&quot;: &quot;list&quot;,</span><br><span class="line">  &quot;usage&quot;: &#123;</span><br><span class="line">    &quot;prompt_tokens&quot;: 5,</span><br><span class="line">    &quot;total_tokens&quot;: 5</span><br><span class="line">  &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p><strong>Note: GPT will generate an embedding array of 1536 dimensions (array length is 1536)</strong> based on the string</p><h2 id="3-Milvus-vector-database"><a href="#3-Milvus-vector-database" class="headerlink" title="3. Milvus vector database"></a>3. Milvus vector database</h2><h3 id="1-Introduction"><a href="#1-Introduction" class="headerlink" title="1. Introduction"></a>1. Introduction</h3><p>Milvus is a world-leading open source vector database that empowers AI applications and vector similarity search to accelerate unstructured data retrieval. Users get a consistent user experience in any deployment environment.</p><p>Milvus 2.0 is a cloud-native vector database designed with an architecture that separates storage from compute. All components in this refactored version are stateless, greatly enhancing system elasticity and flexibility.</p><h3 id="2-Milvus-Related-Documents"><a href="#2-Milvus-Related-Documents" class="headerlink" title="2. Milvus Related Documents"></a>2. 
Milvus Related Documents</h3><ul><li>Zilliz Chinese Technical Zone: <a href="https://link.juejin.cn/?target=https://zilliz.gitee.io/welcome/" title="https://zilliz.gitee.io/welcome/">zilliz.gitee.io&#x2F;welcome&#x2F;</a><ul><li>Technical Video Collection: <a href="https://link.juejin.cn/?target=https://space.bilibili.com/1058892339Milvus" title="https://space.bilibili.com/1058892339Milvus">space.bilibili.com&#x2F;1058892339M…</a></li><li>GitHub: <a href="https://link.juejin.cn/?target=https://github.com/milvus-io/milvus" title="https://github.com/milvus-io/milvus">github.com&#x2F;milvus-io&#x2F;m…</a></li><li>Docs: <a href="https://link.juejin.cn/?target=https://milvus.io/docs" title="https://milvus.io/docs">milvus.io&#x2F;docs</a></li><li>Official FAQ: <a href="https://link.juejin.cn/?target=https://milvus.io/docs/product_faq.md" title="https://milvus.io/docs/product_faq.md">milvus.io&#x2F;docs&#x2F;produc…</a></li><li>Slack: <a href="https://link.juejin.cn/?target=https://milvusio.slack.com/join/shared_invite/zt-1oki7bq78-5eWQ_QJjMStcdyKQxQUqDg%23/shared-invite/email" title="https://milvusio.slack.com/join/shared_invite/zt-1oki7bq78-5eWQ_QJjMStcdyKQxQUqDg#/shared-invite/email">milvusio.slack.com&#x2F;join&#x2F;shared…</a></li><li>Towhee<ul><li>GitHub: <a href="https://link.juejin.cn/?target=https://github.com/towhee-io/towhee" title="https://github.com/towhee-io/towhee">github.com&#x2F;towhee-io&#x2F;t…</a></li><li>Docs: <a href="https://link.juejin.cn/?target=https://docs.towhee.io/" title="https://docs.towhee.io/">docs.towhee.io&#x2F;</a></li></ul></li></ul></li><li>Online hosted version: <a href="https://link.juejin.cn/?target=https://cloud.zilliz.com/" title="https://cloud.zilliz.com/">Zilliz Cloud</a></li></ul><h2 id="4-GPT-Milvus-to-build-a-privatized-knowledge-base"><a href="#4-GPT-Milvus-to-build-a-privatized-knowledge-base" class="headerlink" title="4. GPT+Milvus to build a privatized knowledge base"></a>4. 
GPT+Milvus to build a privatized knowledge base</h2><h3 id="1-Flowchart"><a href="#1-Flowchart" class="headerlink" title="1. Flowchart"></a>1. Flowchart</h3><p><img src="https://s2.loli.net/2023/11/05/D9M8jlmRZOrh7FX.webp"></p><h3 id="2-Detailed-process-realization"><a href="#2-Detailed-process-realization" class="headerlink" title="2. Detailed process realization"></a>2. Detailed process realization</h3><ol><li>Create a vector database collection (the equivalent of a database table).</li><li>Create an index for the collection.</li><li>To import data, call the OpenAI API to convert it into floating-point vector data, then store the text together with its vector in the collection.</li><li>Load the collection into memory for querying.</li><li>For a user query, call the OpenAI API to convert the question into a floating-point vector, then query the vector database with that vector to retrieve the matching text.</li><li>Combine the user’s question with the text retrieved from the vector database into a prompt, and hand it to GPT to polish and generate the answer.</li></ol><blockquote><p>milvus-java: <a href="https://gitee.com/lgySpace/milvus-java">gitee.com&#x2F;lgySpace&#x2F;mi…</a></p></blockquote>]]></content>
    
    
    <summary type="html">Using GPT and the Milvus vector database to build a privatized knowledge base: the vector database provides a good solution for keeping up with the development of large language models.</summary>
    
    
    
    <category term="Technology" scheme="https://www.nablepart.com/categories/Technology/"/>
    
    
    <category term="development" scheme="https://www.nablepart.com/tags/development/"/>
    
    <category term="Backend Technology Sharing" scheme="https://www.nablepart.com/tags/Backend-Technology-Sharing/"/>
    
    <category term="network" scheme="https://www.nablepart.com/tags/network/"/>
    
    <category term="service" scheme="https://www.nablepart.com/tags/service/"/>
    
    <category term="catastrophic crash" scheme="https://www.nablepart.com/tags/catastrophic-crash/"/>
    
    <category term="ChatGPT" scheme="https://www.nablepart.com/tags/ChatGPT/"/>
    
    <category term="good solution" scheme="https://www.nablepart.com/tags/good-solution/"/>
    
    <category term="Database" scheme="https://www.nablepart.com/tags/Database/"/>
    
  </entry>
  
  <entry>
    <title>Are you also looking for alternatives after the Whispering Sparrow downtime</title>
    <link href="https://www.nablepart.com/245fe3041d17/"/>
    <id>https://www.nablepart.com/245fe3041d17/</id>
    <published>2023-11-04T16:01:00.000Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
    <content type="html"><![CDATA[<h2 id="I-The-Whispering-Sparrow-Incident"><a href="#I-The-Whispering-Sparrow-Incident" class="headerlink" title="I. The Whispering Sparrow Incident"></a>I. The Whispering Sparrow Incident</h2><p> On October 23, 2023, Whispering Sparrow suffered a catastrophic crash that resulted in an 8-hour service outage. This was a huge blow to users who relied on Whispering Sparrow, severely affecting their work and lives. I was using it at the time and assumed it was a network problem; restarting did not help.</p><blockquote><p> “Some users said that the company’s product manager drank tea for an afternoon because he couldn’t open Whispering Sparrow…”</p></blockquote><p><img src="https://s2.loli.net/2023/11/04/SITbfog6lN5KYR1.webp" alt="img"></p><p>Afterwards, the company officially announced the cause of the glitch and compensated all users with six months of free membership.</p><p><img src="https://s2.loli.net/2023/11/04/gl5IK1kzJXqATi4.webp" alt="img"></p><p>Whispering Sparrow has been used internally at Alibaba since its launch and has also spread rapidly in technical circles. According to statistics, Whispering Sparrow was rolled out to Alibaba’s product and engineering teams in 2016 and officially opened to the public in 2018. It currently has about 12 million users, serving more than 8,000 active team organizations and more than 1,200 public education organizations. Such an accident makes us think: how should we choose a suitable cloud note product? Which cloud note products on the market are safer and more reliable? 
Are there any cloud note products that focus on local backup?</p><h2 id="Second-cloud-note-taking-products"><a href="#Second-cloud-note-taking-products" class="headerlink" title="II. Cloud note-taking products"></a>II. Cloud note-taking products</h2><p><img src="https://s2.loli.net/2023/11/05/jns3icUt69Ialmz.webp" alt="image.png"></p><p>How do you choose from so many products?</p><h3 id="2-1-Choose-a-product-that-focuses-on-local-retention-of-original-documents"><a href="#2-1-Choose-a-product-that-focuses-on-local-retention-of-original-documents" class="headerlink" title="2.1. Choose a product that focuses on local retention of original documents"></a>2.1. Choose a product that focuses on local retention of original documents</h3><p>After the Whispering Sparrow incident, two products have been appearing frequently in articles around the Internet: Obsidian and SiYuan Notes. Why? Because they put more emphasis on keeping files locally.</p><p><img src="https://s2.loli.net/2023/11/04/ubx4lihfeyJ8ScG.webp" alt="img"></p><h3 id="1）Obsidian"><a href="#1）Obsidian" class="headerlink" title="1）Obsidian"></a>1）Obsidian</h3><p><img src="https://s2.loli.net/2023/11/04/ri5mA2KX6H3nyao.webp" alt="img"></p><p><img src="https://s2.loli.net/2023/11/04/WgobxacD6EXuVsq.webp" alt="img"></p><p>Obsidian is a knowledge management and note-taking application known for its emphasis on relevance and connectivity for individuals and teams, and it is especially suited to users who want to better organize and discover their knowledge. Below is a product description of Obsidian:</p><p>Key Features:</p><ul><li>Bi-directional links: one of the key features of Obsidian is bi-directional links. It allows you to create bi-directional links between your notes, thus building complex knowledge networks. 
You can easily discover and explore correlations between different notes, which helps to understand the topic more deeply.</li><li>Markdown support: Obsidian uses the Markdown markup language, which allows you to write notes in plain text format while supporting a variety of text editing and formatting options.The use of Markdown makes notes easier to read and more portable.</li><li>Local file storage:Obsidian’s files are stored on your local computer, not on a cloud server. This helps improve privacy and security while giving you greater control, but also requires backing up on your own.</li><li>Plugin System: Obsidian allows users to extend functionality through a plugin system. This means you can add customized features such as calendars, timeline views, charts, etc. as needed.</li><li>Themes and Styles: You can choose different themes and styles to beautify the application interface according to your personal preferences in order to make it more in line with your taste.</li><li>Search and Filtering: Obsidian has powerful search and filtering features to help users quickly find the notes they need. You can search for keywords, tags, file names, etc.</li><li>Cross-platform : Obsidian supports multiple operating systems, including Windows, macOS and Linux, which means you can synchronize and access your notes on different devices. Advantage:</li><li>Obsidian emphasizes correlation and mind mapping, which helps to better understand and organize knowledge.</li><li>Local storage adds privacy and security while allowing offline access.</li><li>Markdown support makes text editing simple and easy to format.</li><li>Plugin system provides powerful customization. 
Shortcomings:</li><li>Learning curve: for new users, Obsidian’s features may take some time to learn and master.</li><li>Lack of team collaboration: although Obsidian is suitable for personal knowledge management, it is not as suitable for collaborative projects as some team collaboration tools.</li><li>Local storage must be backed up by the user, which makes it unsuitable for scenarios that require cloud synchronization and team collaboration; cloud synchronization has to be achieved through plugins.</li></ul><p>Obsidian is a powerful tool for users who want to manage and organize their knowledge in a new way, especially those who like to explore relevance and connectivity.</p><ol start="2"><li>SiYuan Notes</li></ol><p><img src="https://s2.loli.net/2023/11/04/SGYFWdJhePcyITi.webp" alt="img"></p><p>SiYuan Notes is a Markdown-based cross-platform knowledge management tool with powerful organization and query features for individuals and teams. The following is a product description of SiYuan Notes:</p><p>Key Features:</p><ul><li>Markdown support: SiYuan Notes uses the Markdown markup language to let users write notes in plain text format, and it supports Markdown’s various text editing and formatting options.</li><li>Relationship mapping: SiYuan Notes provides a relationship mapping view that allows users to visualize the associative relationships between notes. This helps to better understand and explore the connections between knowledge.</li><li>Quick referencing: users can build knowledge networks by quickly referencing the content of other notes between different notes. 
Such referencing relationships are presented as links in the text.</li><li>Cross-platform: SiYuan Notes supports multiple operating systems, including Windows, macOS, and Linux, which means users can synchronize and access their notes on different devices.</li><li>Local file storage: similar to Obsidian, SiYuan Notes stores files on the local computer for added privacy and security, but requires users to make their own backups.</li><li>Tags and folders: users can use tags and folders to organize and categorize notes, making them easier to find and manage.</li><li>Full-text search: SiYuan Notes has a powerful full-text search feature that allows users to quickly find the notes and information they need.</li><li>Export and import: users can export notes as Markdown files, and importing Markdown files is also supported, making them more portable. Advantages:</li><li>Emphasizes the relationships between knowledge to help users better understand and organize it.</li><li>Supports Markdown, simplifying text editing and formatting.</li><li>Provides cross-platform support, allowing users to synchronize and access notes on different devices.</li><li>The relationship mapping view provides a visualization of the knowledge structure. Shortcomings:</li><li>Learning curve: for new users, the features of SiYuan Notes may take some time to learn and master.</li><li>Lack of team collaboration: SiYuan Notes is suitable for personal knowledge management, but not as suitable for collaborative projects as some team collaboration tools.</li><li>Local storage must be backed up by the user, which makes it unsuitable for scenarios that require cloud synchronization and team collaboration.</li></ul><h3 id="2-2-choose-other-cloud-products"><a href="#2-2-choose-other-cloud-products" class="headerlink" title="2.2 Choose other cloud products"></a>2.2 Choose other cloud products</h3><p>For example, choose Impression Notes or Tencent Docs. 
</p><h3 id="2-3-Choose-an-editor-product"><a href="#2-3-Choose-an-editor-product" class="headerlink" title="2.3 Choose an editor product"></a>2.3 Choose an editor product</h3><p><img src="https://s2.loli.net/2023/11/04/YfCQ1Fc97zmSJPG.webp" alt="img"></p><p>Quite a few developers have instead chosen the most primitive approach, a plain editor plus a network drive or GitHub, to replace cloud note products. Replacing cloud notes with an editor generally involves the following steps:</p><ul><li>Edit documents in a local editor</li><li>Configure an image-hosting service (an “image bed”) through a plugin</li><li>Synchronize through a network drive</li></ul><p>There are still plenty of people who love tinkering with setups like this…</p><p><img src="https://s2.loli.net/2023/11/04/GLtScNq7RlPhY4a.webp" alt="img"></p><p><img src="https://s2.loli.net/2023/11/04/2gVuzHsxhptkJFZ.webp" alt="img"></p><p>Which would you choose?</p>]]></content>
    
    
    <summary type="html">On October 23, 2023, Whispering Sparrow suffered a catastrophic crash that resulted in an 8-hour service outage. This was a huge blow to users who relied on Whispering Sparrow, severely affecting their work and lives. I was using it at the time and assumed it was a network problem; restarting did not help.</summary>
    
    
    
    <category term="Technology" scheme="https://www.nablepart.com/categories/Technology/"/>
    
    
    <category term="Backend Technology Sharing" scheme="https://www.nablepart.com/tags/Backend-Technology-Sharing/"/>
    
    <category term="network" scheme="https://www.nablepart.com/tags/network/"/>
    
    <category term="Whispering" scheme="https://www.nablepart.com/tags/Whispering/"/>
    
    <category term="Oscillators" scheme="https://www.nablepart.com/tags/Oscillators/"/>
    
    <category term="service" scheme="https://www.nablepart.com/tags/service/"/>
    
    <category term="work" scheme="https://www.nablepart.com/tags/work/"/>
    
    <category term="catastrophic crash" scheme="https://www.nablepart.com/tags/catastrophic-crash/"/>
    
  </entry>
  
  <entry>
    <title>ESK investment philosophy: a judgment system that prioritizes &quot;knowledge value&quot;</title>
    <link href="https://www.nablepart.com/39763d7c55f6/"/>
    <id>https://www.nablepart.com/39763d7c55f6/</id>
    <published>2023-11-04T09:06:28.000Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
    <content type="html"><![CDATA[<p>On November 3rd, Mi Lei, the founding partner of Zhongke Chuangxing, released the “ESK Value Investment Responsibility Report on Hard Technology” at the Photon Industry Development and Hard Technology Achievement<br>Transformation Forum of the 2023 Global Hard Technology Innovation Conference.</p><p>In the report, Mi Lei pointed out, “In the golden age of hard technology, we need a new innovative investment concept to better promote technological innovation, economic growth, and social progress under the theme of ‘Chinese-style modernization’.” This new concept system is called the ESK (Economic, Social, Knowledge) value<br>investment system.</p><h2 id="Different-from-ESG"><a href="#Different-from-ESG" class="headerlink" title="Different from ESG"></a>Different from ESG</h2><p>Throughout the century-long changes in the venture capital industry, some simple and concise concepts have led to the progress and prosperity of the industry: from the 1930s to the beginning of the 21st century, value<br>investing represented by Graham and Buffett, with “return on value” as the investment core; from the beginning of the 21st century to the present, ESG (environment, social, and corporate governance) investment with<br>“focus on environment, society, and corporate governance” as the core. </p><p>Value investing and ESG investment both attempt to answer the questions posed by the times - value investing solves how to bring returns to capital, while ESG investment tells the world how to achieve “both righteousness<br>and profit”. However, they both have some shortcomings when facing the surging wave of hard technology. 
</p><p>According to the report, “In the golden age of hard technology, in addition to focusing on the return on capital market (economic value) and promoting environmental sustainability, rural revitalization, and common<br>prosperity (social value), investment should also focus on the creation of knowledge value.” </p><p>As the 3.0 version of the value investing system, ESK retains the core principle of value investing’s focus on economic returns, while also covering the 2.0 requirements of ESG’s focus on environmental, social, and<br>corporate governance. At the same time, ESK value investing also focuses on the original driving force of human civilization progress - knowledge value. </p><p>The report points out that the three major systems composed of “knowledge system, economic system, and social system” are the underlying laws that maintain the benign operation and coordinated development of human<br>economy and society. It is precisely based on the disruptive innovation of the three major systems that the three major values (knowledge value, economic value, and social value) naturally arise. Among them, economic<br>value brings wealth to humanity, social value brings happiness and power to humanity, and knowledge value is the source of both values. Focusing on knowledge innovation and creating economic returns and social wealth is<br>the true intention of investment.</p><h2 id="ESK-The-Extension-and-Innovation-of-Investment-Philosophy-in-Hard-Technology"><a href="#ESK-The-Extension-and-Innovation-of-Investment-Philosophy-in-Hard-Technology" class="headerlink" title="ESK: The Extension and Innovation of Investment Philosophy in Hard Technology."></a>ESK: The Extension and Innovation of Investment Philosophy in Hard Technology.</h2><p>The establishment of the ESK system is based on the investment experience and thinking of Dr. Mi Lei, founding partner of Zhongke Chuangxing. 
In 2010, against the background of society’s relative “indifference”<br>to core technology, Dr. Mi Lei proposed the concept of “hard technology”. He emphasized that “if we only focus on the innovation of business models, and do not pay attention to the innovation of underlying technology,<br>this is not conducive to China’s economic and social progress. Especially when China is at a turning point in history, the Chinese economy will shift from factor-driven and investment-driven to innovation-driven,<br>and hard technology is the key to supporting China’s new era of economic development and to moving from following to leading.” Dr. Mi Lei believes that “hard technology” is not only “hard” in terms of high technical<br>barriers and originality, but also “hard” in the value it creates: hard technology investment can not only bring considerable economic returns to the capital market, but also create social value, such as promoting<br>sustainable development of the environment, promoting rural revitalization, and helping achieve common prosperity. In addition, hard technology is also the crystallization of human wisdom and knowledge, and an<br>achievement of civilization. He believes that the global economy is declining because the previous knowledge systems have been digested, and only by creating a new round of knowledge, economic, and social value can we<br>build a better future. Looking at human history, the values of knowledge, economy, and society are closely linked. The three are like the three sides of a triangle: closely connected, they form the<br>most stable structure, and ultimately propelled the agricultural revolution, industrial revolution, information revolution, and even the ongoing intelligence revolution.</p><p>As the “Report” states, knowledge is the foundation of all technological and industrial revolutions, which are based on relevant theoretical knowledge systems and controllable experimental technologies. 
Knowledge is<br>also the source of economic and social wealth. Amid the current boom in hard technology, proposing the ESK value system is necessary.</p><h2 id="ESK-is-more-than-just-an-investment"><a href="#ESK-is-more-than-just-an-investment" class="headerlink" title="ESK is more than just an investment."></a>ESK is more than just an investment.</h2><p>Under the guidance of the ESK value investment system, Zhongke Chuangxing has always adhered to the goals of focusing on major national and social needs, creating a better living environment for the people,<br>and fostering talent development and knowledge progress. Throughout fundraising and management, we have directly or indirectly referred to and implemented the ESK value investment system, striving for both<br>quality and quantity in development.</p><p>The report shows that in the major areas of information, energy, materials, and life sciences, Zhongke Chuangxing has actively explored and patiently accompanied the development of over 420 hard-tech<br>companies, many of which have contributed to promoting technological self-reliance, green and sustainable development, and improving people’s quality of life.</p><p>In the process of exploring and summarizing the ESK value investment system, Zhongke Chuangxing has gradually developed its own value investment philosophy as a Chinese venture capital institution. Hard-tech startups<br>often need to purchase expensive production and testing equipment in the early stages of entrepreneurship, yet this huge investment may not bring high returns and can entail high risks. 
For<br>example, in the case of photon chips, an early-stage photon chip company needs to invest at least 50 million yuan in equipment for testing and packaging to provide samples for customers.</p><p>In order to better support the development of photonics industry companies, Zhongke Chuangxing has collaborated to build an integrated entrepreneurial ecosystem called the “Curvature Engine” hard-tech enterprise<br>community. This hard-tech enterprise technology space, located in Xi’an, includes “four parks and one platform”. Among them, the photon chip park and the Shaanxi Optoelectronic Pioneer Institute, the only<br>institution in China with an IMEC process package, serve as professional shared technology platforms, providing advanced production environments and equipment such as shared production lines and clean workshops for photon<br>chip companies. This significantly shortens the product delivery cycle for enterprises and effectively lowers the barrier to entry for entrepreneurial companies.</p><p>In addition to funding, hard-tech startups also face other transformational issues. To achieve the goal of producing hard-tech champions, Zhongke Chuangxing initiated the Hard-tech Enterprise Entrepreneurship Camp in 2015,<br>the earliest entrepreneurship training camp in China focused on the hard-tech field. This year, Zhongke Chuangxing, in collaboration with the China Entrepreneur Exploration Program, led scientists and<br>entrepreneurs to visit leading companies in the new trillion-dollar industry and explore the core secrets of achieving super growth for future industrial companies in the next round of transformation.</p><p>In terms of knowledge value, Zhongke Chuangxing is also actively exploring. In 2020, Zhongke Chuangxing and the Torch Center of the Ministry of Science and Technology jointly researched and formed the “Hard-tech Technology<br>Catalog” and released the “Hard-tech Tree”. 
A year later, Zhongke Chuangxing participated in the writing of China’s first hard-tech-themed books, “Hard-tech: The Strategic Support for China’s Technological Self-reliance”<br>and “Hard-tech: The Frontier of Great Power Competition”, comprehensively expounding the connotation and importance of hard-tech and pushing the value of the “hard-tech” concept to a new level. In addition,<br>Zhongke Chuangxing has invested in and incubated more than 200 scientific entrepreneurial teams, not only promoting the transformation of scientific and technological achievements into productivity (the conversion of<br>knowledge value into economic value), but also helping this cutting-edge research and development create more knowledge value.</p><p>Finally, at the press conference, Dr. Mi Lei stated that the “Hard-tech ESK Value Investment Responsibility Report” not only showcases Zhongke Chuangxing’s investment map, but is also intended to become a “compass”<br>for China’s venture capital industry, a “localized value investment path” for China’s hard-tech venture capital. He hopes that the thinking of Zhongke Chuangxing can inspire colleagues in venture capital, enabling<br>investment to promote knowledge innovation and create even more abundant wealth for the world and human civilization.</p>]]></content>
    
    
    <summary type="html">ESK value investment emphasizes focusing on the original driving force of human civilization progress - &quot;knowledge value&quot;.</summary>
    
    
    
    <category term="Finance" scheme="https://www.nablepart.com/categories/Finance/"/>
    
    
    <category term="Investment philosophy" scheme="https://www.nablepart.com/tags/Investment-philosophy/"/>
    
    <category term="US market" scheme="https://www.nablepart.com/tags/US-market/"/>
    
    <category term="investment" scheme="https://www.nablepart.com/tags/investment/"/>
    
    <category term="Hard technology" scheme="https://www.nablepart.com/tags/Hard-technology/"/>
    
    <category term="ESG investment" scheme="https://www.nablepart.com/tags/ESG-investment/"/>
    
  </entry>
  
  <entry>
    <title>Underperformance of tech giants leads to sell-off in US stocks, which need to avoid hitting a new low in the next three months.</title>
    <link href="https://www.nablepart.com/b28eb7abacbb/"/>
    <id>https://www.nablepart.com/b28eb7abacbb/</id>
    <published>2023-11-03T16:43:20.000Z</published>
    <updated>2025-08-25T09:00:39.798Z</updated>
    
    <content type="html"><![CDATA[<p>The Nasdaq fell 1.8% overnight, with no large tech stocks spared. This week, Google and Meta announced better-than-expected third-quarter financial results, but market concerns about the future prospects of both companies<br>diluted the positive financial numbers, with Meta falling nearly 4% after the financial report; Amazon’s third-quarter revenue and profits exceeded expectations, with its cloud business (AWS) revenue increasing slightly<br>year-on-year, but far less than its competitors Microsoft and Google, and its revenue guidance for the fourth quarter was also lower than expected, causing its stock price to fall during the financial report conference call.<br>On the same day, the strong growth in US GDP was announced, with a quarterly growth rate of 4.9% (expected to be 4.5%), the fastest since the peak of the 2021 recovery. This is not good news for the stock market, as it means<br>that the hawkish stance of the Federal Reserve may be maintained for a longer period of time. 
Although we still see strong seasonal prospects for the US stock market from October to December, uncertainty may make the process<br>of stock index recovery more turbulent.</p><h2 id="Tech-giants’-fourth-quarter-guidance-unsettles-the-market-and-on-Thursday-the-three-major-US-stock-indices-hit-a-four-month-low-simultaneously"><a href="#Tech-giants’-fourth-quarter-guidance-unsettles-the-market-and-on-Thursday-the-three-major-US-stock-indices-hit-a-four-month-low-simultaneously" class="headerlink" title="Tech giants’ fourth-quarter guidance unsettles the market, and on Thursday, the three major US stock indices hit a four-month low simultaneously."></a>Tech giants’ fourth-quarter guidance unsettles the market, and on Thursday, the three major US stock indices hit a four-month low simultaneously.</h2><p>Earlier, we mentioned that investors need to pay attention to whether the market is at a turning point, as higher bond yields and steeper yield curves could hit overvalued growth stocks. The Magnificent Seven (Mag7),<br>consisting of Amazon, Alphabet, Apple, Nvidia, Meta, Microsoft, and Tesla, have become synonymous with this bull market, accounting for a quarter of the market value of the S&amp;P 500 index.</p><p>Since the beginning of this year, the Nasdaq 100 index and the S&amp;P 500 index have rebounded by nearly 50% and 30%, respectively, with several large-cap tech stocks contributing most of the index gains. However, except<br>for Nvidia, the other Mag7 companies’ revenue has struggled to keep up with, or only slightly exceeded, nominal economic growth. And now, their valuation premium is higher than during the first rebound from the<br>COVID-19 lockdowns. There are signs that the Mag7’s market dominance is fading, with the group averaging an 11% drop from its summer peaks. Therefore, although the third-quarter earnings of the tech giants were not bad, the<br>market’s tolerance is decreasing. 
In other words, once performance falls slightly short of expectations, the market’s punishment on stock prices will be far greater than before.</p><p>On the 26th, the three major US stock indexes hit a four-month low together, with the S&amp;P 500 index falling below its 200-day moving average to around 4,137.23 points, and the Nasdaq 100 index falling 189 points to 14,109<br>points, down 11% from its highest point earlier this year despite rebounding nearly 50% at one point. This is mainly due to the weak performance of some large-cap tech stocks as investors digest disappointing earnings data.</p><p>The day before, the three major indexes had already collectively fallen, with the Nasdaq falling more than 2%, and Alphabet, Google’s parent company, plummeting nearly 10% due to lower-than-expected revenue from its cloud<br>business, marking its largest single-day decline since the pandemic. In sharp contrast, Microsoft’s intelligent cloud business continued to maintain its industry-leading position, with its stock price rising by 3%. However,<br>Microsoft alone cannot change the pessimistic market sentiment.</p><p>Specifically, Alphabet, which just experienced its biggest stock price drop since March 2020, had an EPS of $1.55, a YoY increase of 46%; revenue of $76.6 billion, a YoY increase of 11%; and operating profit of $21.3 billion,<br>a YoY increase of 24%.</p><p>The revenue of different departments is as follows: The total revenue of Google Services is $67.9 billion, a year-on-year increase of 10.7%. Among them, Google Advertising accounted for $59.6 billion, a year-on-year<br>increase of 9.4%. Google Search and other segments accounted for $44 billion, a year-on-year increase of 11%. YouTube Advertising accounted for $7.9 billion, a year-on-year increase of 12.4%. 
However, the reason for the<br>sharp drop in stock prices this time is that the revenue of Google Cloud department is $8.4 billion, a year-on-year increase of 22.4% (compared to a year-on-year increase of 28% in the previous quarter).</p><p>Overall, Alphabet’s advertising revenue grew by 9.5%, and with the boost from cloud business, Alphabet’s total revenue reached $76.6 billion, a year-on-year increase of 11%, which is still lower compared to Microsoft’s<br>40%+ or Apple’s 30%+. However, compared to other industries, this is an extremely advantageous number (Tesla only has 14%).</p><p>Alphabet still holds over 90% of the global search engine market share, despite Microsoft’s introduction of AI search. Alphabet has not yet lost its leading position and is still striving to catch up in the field of AI.<br>Therefore, one of the reasons for this decline is that the previous months’ rise led to an overvaluation.</p><p>Following the release of its financial report, Meta also experienced a sharp drop in stock prices. Meta’s third-quarter revenue increased by 23% compared to the same period last year, reaching $34.15 billion, higher than<br>the expected $33.51 billion. Diluted EPS soared from $1.64 in the previous year to $4.39, far exceeding analysts’ expected $3.60. The advertising prices on its platform decreased by 6% this quarter. Although it is negative growth, compared to the sharp double-digit percentage decline in the past 18 months, this is a significant improvement. Moreover, this decline is lower than analysts’ expected 8.9%, and analysts now have more confidence that advertising prices will recover in the future. 
The problem is that despite the signs of bottoming out in advertising prices, this positive aspect is overshadowed by management’s warning of “uncertainty” in revenue prospects for 2024.</p><p>Currently, Meta tightly controls costs but continues to invest in new areas such as AI, augmented reality&#x2F;virtual reality headsets, and the metaverse, hoping that AI will change its business. However, management says it<br>is too early to discuss profitability now. As investors focus more on prospects rather than results, Meta’s stock price fell by more than 3%, which also dragged down the Nasdaq 100 index to a four-month low.</p><p>Coincidentally, Amazon, which also performed well but saw its stock price decline due to fourth-quarter guidance, reported third-quarter revenue of $143.1 billion, a year-on-year increase of 13%, higher than the market’s<br>expected year-on-year increase of 11% to $141.4 billion, and a quarter-on-quarter increase of 6.5% compared to the second quarter. However, Amazon’s fourth-quarter guidance was not satisfactory. The company believes that<br>net sales in the fourth quarter will be in the range of $160 billion to $167 billion, with an expected midpoint of $163.5 billion, which is an increase of less than 10% year-on-year and not far from the historically<br>lowest growth rate, and lower than the market’s expected $166.6 billion.</p><h2 id="Strong-economic-data-and-US-bond-yields-continue-to-weigh-on-risk-assets"><a href="#Strong-economic-data-and-US-bond-yields-continue-to-weigh-on-risk-assets" class="headerlink" title="Strong economic data and US bond yields continue to weigh on risk assets."></a>Strong economic data and US bond yields continue to weigh on risk assets.</h2><p>Another factor putting pressure on the market is undoubtedly the Federal Reserve, and the accompanying continuous rise in US bond yields has also hit risk assets. 
The US GDP for the third quarter increased at an<br>annualized rate of 4.9%, far exceeding the previous value of 2.1% and the expected 4.3%. This is mainly due to the push from consumption, inventory, and government spending. In addition, durable goods orders rose 4.7%<br>month-on-month in September, exceeding the expected 1.7%.</p><p>The resilience of US consumers is surprising. The data shows that consumption contributed the most to growth, with a 4% increase, of which goods increased by 4.8% and services by 3.6%. This makes it difficult for the<br>Federal Reserve to abandon its hawkish stance at next week’s meeting. Currently, it is expected that even if the Federal Reserve announces no change in November, it cannot rule out the possibility of raising interest<br>rates again in December, and the timing of rate cuts may be postponed until the third or fourth quarter of 2024, with the magnitude of rate cuts greatly reduced to two times.</p><p>Last week, initial jobless claims in the US hit a nine-month low of 198,000, and the labor market remains strong. After the data was released, Powell mentioned in a public speech that he would act cautiously (echoing<br>expectations of no rate hike in November) while continuing to suggest the possibility of future rate hikes (due to recent hot economic data).</p><p>At the same time, the recent rebound in oil prices may also affect future US inflation, thereby affecting the Federal Reserve’s interest rate hike process and impacting stock indices. On the 26th, WTI crude oil fell<br>back to near the key support level of 83.40 after giving up all its gains from the 25th. US crude oil inventories increased by 1.37 million barrels last week, exceeding the expected 239,000 barrels. 
However, as<br>geopolitical tensions have not yet subsided and the peak demand for winter heating approaches, oil prices may still remain high.</p><p>Technically speaking, oil prices rebounded for the third time in October from around $82.30, which is the 38.2% retracement level of the May-September uptrend. The strong economic growth in the United States has boosted<br>market confidence, so bears have not yet truly taken control. However, the downward trend line and weekly chart pattern since September also suggest that the current rebound in oil prices may be relatively limited.<br>Bulls will focus on the former high of $85.50 and the $88 level. Currently, major Wall Street banks still predict that the possibility of oil prices approaching $100 cannot be ruled out.<br>It is worth mentioning that last week the yield on the 10-year US Treasury bond briefly reached 5%, before hovering around 4.9%, reaching a new high since 2007. However, the shorter-term 2-year yield fell back below<br>5.2% from its peak,<br>with the yield spread between the two approaching zero. 
The recent easing of the degree of yield curve inversion seems to imply a significant decrease in the probability of an economic recession in the US.<br>The rise in long-term yields has a tangible impact on the economy, as investors can obtain high yields by purchasing US bonds, which also affects the attractiveness of other risk assets, leading to pressure on assets<br>such as the stock market.</p><h2 id="Breaking-below-the-200-day-moving-average-US-stocks-need-to-avoid-hitting-new-lows-in-the-next-three-months"><a href="#Breaking-below-the-200-day-moving-average-US-stocks-need-to-avoid-hitting-new-lows-in-the-next-three-months" class="headerlink" title="Breaking below the 200-day moving average, US stocks need to avoid hitting new lows in the next three months."></a>Breaking below the 200-day moving average, US stocks need to avoid hitting new lows in the next three months.</h2><p>From a technical perspective, the Nasdaq 100 index experienced its most significant single-day decline since 2023 on the 26th, causing it to fall to its lowest point in four months and break below the bottom of the<br>support zone that has been in place since June. The index managed to avoid setting new lows for nearly three months, but now it is showing that the downward trend that started in July is still continuing. Assuming the<br>downward trend continues, we may not see the next support level until 14,200 points. For the bulls, the current goal is for the index to return to the support zone above 14,400 points, while achieving new highs would<br>require a breakthrough above 14,750 points.</p>]]></content>
    
    
      
      
    <summary type="html">&lt;p&gt;The Nasdaq fell 1.8% overnight, with no large tech stocks spared. This week, Google and Meta announced better-than-expected third-quarter</summary>
      
    
    
    
    <category term="Securities" scheme="https://www.nablepart.com/categories/Securities/"/>
    
    
    <category term="Stock market" scheme="https://www.nablepart.com/tags/Stock-market/"/>
    
    <category term="Securities" scheme="https://www.nablepart.com/tags/Securities/"/>
    
    <category term="Investment" scheme="https://www.nablepart.com/tags/Investment/"/>
    
    <category term="Financial report" scheme="https://www.nablepart.com/tags/Financial-report/"/>
    
    <category term="GDP" scheme="https://www.nablepart.com/tags/GDP/"/>
    
  </entry>
  
  <entry>
    <title>Munger: Better to Look Stupid Than Broke</title>
    <link href="https://www.nablepart.com/c929438f3804/"/>
    <id>https://www.nablepart.com/c929438f3804/</id>
    <published>2023-11-03T11:50:26.000Z</published>
    <updated>2025-08-25T09:00:39.794Z</updated>
    
    <content type="html"><![CDATA[<p>Recently, Munger, who turns 100 in two months, was interviewed for the first time by the technology podcast Acquired, talking about his investment philosophy and the thinking behind his decisions over the past 99 years, as well as his views on current hot topics: what’s wrong with today’s global securities market, how to view investment opportunities in China, and a few suggestions for young people.</p><p><img src="https://cdn.jsdelivr.net/gh/Mu1sezz/Picture@img/img/20231104192135.png"></p><p>CITIC Books is the first to edit and share with you the wisdom of Munger, who is about to enter his 100th year of life.</p><h2 id="I-Rather-look-stupid-than-go-bankrupt"><a href="#I-Rather-look-stupid-than-go-bankrupt" class="headerlink" title="I. Rather look stupid than go bankrupt"></a>I. Rather look stupid than go bankrupt</h2><h3 id="Stock-trading-is-similar-to-gambling"><a href="#Stock-trading-is-similar-to-gambling" class="headerlink" title="Stock trading is similar to gambling"></a>Stock trading is similar to gambling</h3><p>For many Americans, retail stock trading is similar to gambling. Psychologically, humans are natural trend-followers. People don’t know anything about the companies, or about anything whose price fluctuates, and <strong>retail traders are simply chasing the price up and down</strong>. </p><p>They believe they know ahead of time how prices will move, so they increase their leverage year after year in pursuit of returns, trading more and more but earning smaller and smaller profits. 
Having huge leverage is the only way they can get these huge returns, and if you’re already rich, this will drive you crazy.</p><p><strong>When faced with an investment quagmire, we’d rather look stupid than broke.</strong> If I were running the world, I’d tax short-term traders heavily and drive those speculators out of the market.</p><h3 id="II-Real-estate-only-gets-two-or-three-chances-in-a-century"><a href="#II-Real-estate-only-gets-two-or-three-chances-in-a-century" class="headerlink" title="II. Real estate only gets two or three chances in a century"></a>II. Real estate only gets two or three chances in a century</h3><p>In good business, every decision is easy and you don’t even have to think about it; in bad business, every decision is difficult and you are always in and out and struggling. Berkshire’s investment in Japan is demonstrably good business.</p><p>If you’re as smart as Warren Buffett, the idea springs to mind two or three times per century. Interest rates in Japan have only been 0.5 per cent a year for a decade. And the companies are really entrenched, old-fashioned companies that own all these cheap copper mines and rubber plantations, so you can borrow all that money for ten years ahead and go buy shares that pay a 5% dividend.</p><p>A lot of cash flow without having to invest or think about it, and how often do you find an opportunity like that? You’d be lucky to get one or two such opportunities in a century. We can sense that no one else but Berkshire could have done it, and it took a long time to invest $10 billion. But it’s like God opened the window, and making money is very easy indeed.</p><h3 id="Entrepreneurs-tend-to-hate-VCs"><a href="#Entrepreneurs-tend-to-hate-VCs" class="headerlink" title="Entrepreneurs tend to hate VCs"></a>Entrepreneurs tend to hate VCs</h3><p>I think it’s almost impossible to make VC investments over and over again. 
When the money gets piping hot, you have to make decisions quickly, and people involved in VC (venture capital) are gambling.</p><p>Venture capital doesn’t work as well for society as it should. If I could design the perfect financing system, it would have to be a legitimate business first and foremost, and by developing the right people and tapping into their power, I could help them run their business without interfering with them too much.</p><p>Overall, after a lot of exposure to people from VCs, entrepreneurs actually tend to hate the VCs. They don’t feel that the VCs are their partners, and they don’t feel that the VCs are helping them. Instead, they think that VCs only care about their own interests.</p><p>At Berkshire, we won’t sell even if some jerk banker offers a 20x P&#x2F;E on some bad business. If it’s a problem business that we’ve been unable to fix, we’ll sell. Sticking to what helps us has been a Berkshire principle for years. You don’t want to make money by cheating investors, which is what a lot of venture capital does.</p><h2 id="The-next-20-years-China-is-a-rich-mine-of-investment"><a href="#The-next-20-years-China-is-a-rich-mine-of-investment" class="headerlink" title="The next 20 years, China is a rich mine of investment"></a>The next 20 years, China is a rich mine of investment</h2><p>In China, you are really seeing a whole big country go from agrarian poverty to modern civilisation in a very short period of time. No big country has ever developed that fast; a small country like Singapore did it, but no big country has.</p><p>My view on investing in China is that <strong>one, the Chinese economy has better prospects than almost any other large economy over the next 20 years; and two, the giant companies in China are stronger and much cheaper than anywhere else</strong>.  
So naturally, I’m willing to take on some risk from Chinese companies in my portfolio; whether it’s 18% or higher, it’s not a problem for me.</p><p>Plus, I’m kind of a “hardcore BYD fan”, and they keep me on my toes while being aggressive at the same time. We can’t tell who will win amid the hot money in the auto industry. All the answers are still up in the air, and there are probably only one or two brands that will make money.</p><p>Wang Chuanfu is a natural engineer and production manager, and I’ve never seen anyone like him who can make anything happen.</p><p>He has a PhD in engineering, and compared to Musk, Wang Chuanfu is much better at hands-on practical work and closer to the roots of manufacturing. When he sees someone else’s part, he can manufacture that part, and so efficiently that he can look at it in the morning and make it in the afternoon.</p><p>It’s a rich mine: it’s valuable to have so many talented people in one place, and it solves almost all the problems that electric cars have in terms of motors, acceleration, brakes, and so on.</p><h2 id="III-My-favourite-company-Costco"><a href="#III-My-favourite-company-Costco" class="headerlink" title="III. My favourite company: Costco"></a>III. My favourite company: Costco</h2><p>My top three investments are Berkshire, Li Lu’s fund, and Costco. Warren once joked that “two terrorists hijacked the plane we were on and said they would grant us one last wish before executing us, whereupon Munger said, can I tell you one more time what’s so great about Costco, and I said, kill me first! 
”</p><p>Costco’s strengths are many:</p><ul><li><p>It tries to save customers money in every way possible: if you see an item at Costco, you can be pretty sure you are getting the best price;  </p></li><li><p>Employee wages are almost twice those of the competition, with employees who push carts or stock in the car park making over $22 an hour;</p></li><li><p>No mass promotions; special benefits for consumers in the form of reward points build up the reputation of membership;</p></li><li><p>Exceptionally spacious parking spaces, all of which are 10 feet wide;</p></li><li><p>Vendors don’t get paid until the item is sold.</p></li></ul><p>Costco’s low SKU counts and high inventory turns put these advantages together perfectly, and it’s the magic of a business model and corporate culture that require great execution to make them work. You have to really get down to business and then keep sticking to the basics every day, every week, every year for 40 years, and that’s not easy. Buy a great company like Costco and don’t bother thinking about exiting.</p><h2 id="IV-A-young-man-knows-the-rules-an-old-man-knows-the-exceptions"><a href="#IV-A-young-man-knows-the-rules-an-old-man-knows-the-exceptions" class="headerlink" title="IV A young man knows the rules, an old man knows the exceptions"></a>IV A young man knows the rules, an old man knows the exceptions</h2><h3 id="On-investing-always-bet-heavily-on-the-best-investments"><a href="#On-investing-always-bet-heavily-on-the-best-investments" class="headerlink" title="On investing: always bet heavily on the best investments"></a>On investing: always bet heavily on the best investments</h3><p>Imagine a punch card on which you can punch only 20 holes. Each hole represents an investment; make an investment, punch a hole. Once all 20 holes are punched, a lifetime of investment opportunities is used up. 
For a mindful and disciplined investor, making only 20 investments in a lifetime will surely result in a better rate of return in the end.</p><p>When you hold a particular stock for 5 years, you slowly get into it and understand it better. When you’re sure you’re right, be sure to bet heavily on the best investments, even though your bets will inevitably vary in size. This is not something business school will teach: <strong>read more, think more, see more of the world</strong>. </p><p><strong>On work: put in intellectual effort, work hard, and pray for good luck</strong></p><p>You have to work hard to try to make yourself rich, but what you sell has to be useful to someone else: something you yourself would buy if you were the customer. <strong>Don’t make money by selling things that are bad for people,</strong> for if you do, no amount of money will make up for the virtues you’ve thrown away. If I were to start all over again, I would give just as much intellect, work just as hard, and pray for good luck, as I do now.</p><p><strong>On partners: a good partnership is one that lets everyone play to their strengths</strong></p><p>Most partnerships that work well over time don’t follow a formula. The best partners still need to like each other, but I wouldn’t use any one formula to describe the relationship. It’s best to let everyone excel and, at the very least, like what they’re doing.</p><p><strong>On family: building trust with a good spouse</strong></p><p>You have to have a happy family, and as I always say, “The best way to get a good spouse is to deserve one”. It is essential to build trust with your spouse, especially when it comes to educating your children, as well as to have a good relationship with everyone and to help each other through the difficult times.</p><h2 id="V-100th-Birthday-Plan"><a href="#V-100th-Birthday-Plan" class="headerlink" title="V. 100th Birthday Plan"></a>V. 
100th Birthday Plan</h2><p><strong>Last question:</strong> Charlie, turning 100 in two months, any plans?</p><p><strong>Charlie Munger:</strong> I’m going to California for a big party.</p>]]></content>
    
    
    <summary type="html">In this blog, Charlie Munger shares his investment philosophy and his views on current hot topics.</summary>
    
    
    
    <category term="Finance" scheme="https://www.nablepart.com/categories/Finance/"/>
    
    
    <category term="News" scheme="https://www.nablepart.com/tags/News/"/>
    
  </entry>
  
  <entry>
    <title>The pessimistic expectations for the logistics sector are gradually dissipating, and the express sub-industry is hovering at the bottom, waiting for signals of recovery in the supply chain.</title>
    <link href="https://www.nablepart.com/e46a500612b5/"/>
    <id>https://www.nablepart.com/e46a500612b5/</id>
    <published>2023-11-02T16:23:00.000Z</published>
    <updated>2025-08-25T09:00:39.794Z</updated>
    
    <content type="html"><![CDATA[<p>With the rapid development of the real economy and the return of value to the recovery chain, China’s logistics market will continue to grow rapidly. The industry’s development will mainly show up in total volume and in efficiency, and investment in the logistics industry will be promoted along these two lines. The sector is mainly composed of three types of assets: cyclical, growth, and infrastructure. Among them, the core variables for the express delivery segment are industry concentration and single-ticket prices.</p><p>Previously, when the market was under pressure, the transportation sector could often outperform it; because the three asset types have different business models, each offers opportunities under different market conditions. However, as of October 26th, the Shenwan Transportation Index had fallen 14.86% this year, ranking eighth among Shenwan’s first-level industries. Within it, the growth-category Shenwan Logistics sub-industry had fallen 24.26%, underperforming the CSI 300 Index.</p><p>As an important link in the recovery chain, express logistics, which recovers quickly after reopening, is widely expected to see opportunities. China’s consumer demand is e-commerce-driven, with 90% of current parcel volume coming from e-commerce and 10% from business and personal parcels. On the one hand, demand is highly resilient, as express logistics mainly transports clothing, daily necessities, and home appliances. On the other hand, supply is highly elastic, and transportation capacity recovered quickly in the post-epidemic era. 
Following this logic, the long-term returns of the express delivery industry are worth looking forward to; after all, the industry has natural-monopoly characteristics, and economic laws dictate that concentration will rise.</p><p>However, the economy has recovered only weakly so far this year, chiefly because real estate, the most important driver of total demand, remains weak, which has to some extent dragged down related industries along the construction chain. Still, with China Investment Corporation recently adding to its ETF holdings, approval to issue special treasury bonds worth a trillion yuan, and a new round of share buybacks by listed companies, the outlook for the recovery chain is promising, and the logistics sector is waiting for signals of recovery.</p><p>On prices, owing to the industry off-season, the September single-ticket income of the “Tongda” carriers declined year-on-year, with Yunda’s and Shentong’s falling more than 10%. In the fourth-quarter peak season, however, e-commerce parcels have begun to rise in price, and the express “grain-producing areas” (regions where delivery costs are lower than express prices) have basically completed their price repair. For example, since the September 1st price increase, prices in the Chaoshan area have held steady, some brands in the Guangdong, Shenzhen, and Yiwu areas have also raised their base prices, and the industry’s pricing environment has rebounded.</p><p>Company by company, YTO’s single-ticket income fell more than 7% year-on-year and was flat month-on-month. SF Express, Yunda, YTO, and Shentong recorded single-ticket income / year-on-year changes of 17.21 yuan / -4.12% (excluding Fengwang), 2.29 yuan / -12.93%, 2.34 yuan / -7.32%, and 2.11 yuan / -13.52%, respectively. 
In the same month, SF Express, Yunda, YTO, and Shentong’s single-ticket income increased by 0.67 yuan, 0.12 yuan, 0.00 yuan, and 0.01 yuan month-on-month, respectively.</p><p>Under the triple pressure of off-season demand, prices, and costs, third-quarter results for e-commerce express delivery are expected to fall quarter-on-quarter, and listed companies’ share prices may suffer in the short term. However, the market’s pessimism toward the express sector is already largely priced in, leaving limited downside.</p><p>Looking at third-quarter reports, YTO Express posted operating income of 40.759 billion yuan for the first three quarters, up 4.98% year-on-year, while net profit attributable to shareholders was 2.659 billion yuan, down 4.06% year-on-year. The causes were fierce industry price competition and a decline in average parcel prices; in addition, with more capacity coming online for the fourth-quarter e-commerce peak season, cost pressure is squeezing profitability. On valuation, Wind’s Express Index trades at a P/E of about 17.74x, currently a low level.</p><p>From the top down, investors should focus on the industry’s volume growth and cost reduction: “high growth + cost reduction” is expected to be the key driver of express companies’ performance. From the bottom up, there are potential opportunities in individual stocks, including industry leaders such as ZTO Express and YTO Express, second-tier players like SF Holding, turnaround candidates like Yunda Holding, latecomers like STO Express, and large-item specialists like Deppon Express. 
However, specific portfolio allocation requires a dynamic understanding of shifts in the relative competitiveness and profit prospects of the industry’s major players.</p><p>Differentiation within the industry is an inevitable long-run trend, especially across the strong categories of the various e-commerce platforms. For example, JD.com focuses on 3C electronics while Pinduoduo focuses on fresh produce and daily necessities; the resulting differences in average order value and delivery cost are stratifying the express delivery industry. On this judgment, and by reference to Europe, America, Japan, and South Korea, the future industry will likely settle into a “high-end leader, mid-to-high-end players, mid-to-low-end players” pattern. With JD.com’s acquisitions of Kuayue Express and Deppon Express, the high-end express segment has become a duopoly between the “SF camp” and the “JD camp”, although the coming listings of J&T Express and Cainiao may significantly reshape the landscape. The current structure is still in a period of accelerating competition and is not yet stable; regional price competition and the overall pattern of e-commerce delivery are about to become clear.</p><p>In the short term, speculative trading behavior - such as racing to front-run the recovery - is the key marginal pricing factor. In the long term, investors need to identify companies whose fundamental improvements have not been fully priced in, focusing on whether their networks are stable and their financial reports healthy; only then can sufficient alpha be generated over the medium to long term. 
Taking September data as an example: leading company SF Express (excluding Fengwang) achieved overall volume growth of 20.5%, with express parcels (including returns) growing at a double-digit rate and traditional express parcels at a high single-digit rate.</p>]]></content>
    
    
    <summary type="html">With the rapid development of the real economy and the return of value to the recovery chain, the future Chinese logistics market will also experience high-speed growth.</summary>
    
    
    
    <category term="Securities" scheme="https://www.nablepart.com/categories/Securities/"/>
    
    
    <category term="Stock market" scheme="https://www.nablepart.com/tags/Stock-market/"/>
    
    <category term="Securities" scheme="https://www.nablepart.com/tags/Securities/"/>
    
    <category term="Logistics sector" scheme="https://www.nablepart.com/tags/Logistics-sector/"/>
    
    <category term="China Logistics" scheme="https://www.nablepart.com/tags/China-Logistics/"/>
    
    <category term="Investment" scheme="https://www.nablepart.com/tags/Investment/"/>
    
  </entry>
  
  <entry>
    <title>How does an anime-style mobile game keep people playing for 10 years?</title>
    <link href="https://www.nablepart.com/f7b8e1e0aa08/"/>
    <id>https://www.nablepart.com/f7b8e1e0aa08/</id>
    <published>2023-11-02T12:00:00.000Z</published>
    <updated>2025-08-25T09:00:39.794Z</updated>
    
    <content type="html"><![CDATA[<p><img src="https://s2.loli.net/2023/11/02/WF4axvIjrzdegmA.png" alt="image.png"></p><h2 id="Secondary-handheld-games-are-no-longer-a-young-genre"><a href="#Secondary-handheld-games-are-no-longer-a-young-genre" class="headerlink" title="Secondary handheld games are no longer a young genre."></a>Anime-style mobile games are no longer a young genre</h2><p>I still remember that back in 2016, Girls’ Frontline and Onmyoji launched one after another, and FGO (Fate/Grand Order) arrived in China soon afterward. At that time, everyone still viewed anime-style mobile games through a subculture lens, assuming their player base was a narrow group of hardcore anime fans - a niche audience.</p><p>In the not-so-distant 2018, Google’s mobile gaming report for the first time singled out anime-style games as a major category alongside casual games and MMOs, which surprised quite a few people.</p><p>The following year, Arknights launched on all platforms. Another year later, Genshin Impact arrived.</p><p>Today, things are very different. The player base of anime-style mobile games has grown so large that it is hard for anyone to characterize it simply. Six or seven years ago, if someone asked “who plays anime-style mobile games”, the answer “otaku” would not have been far off; now, answering the same question accurately would take a market research firm a year of detailed investigation. 
</p><p>A new question arises: how do you keep so many different types of players playing - and keep them playing - three or four years after launch?</p><h2 id="why-can-you-play-a-secondary-yuan-handball-game-for-years"><a href="#why-can-you-play-a-secondary-yuan-handball-game-for-years" class="headerlink" title="why can you play a secondary yuan handball game for years?"></a>Why can you play an anime-style mobile game for years?</h2><p>At first glance, this question seems too basic, with plenty of prior experience to draw on. But think about it: casual games have life cycles too short (usually three or four months) to even qualify for the question, while MMOs live long but lean heavily on social ties - players often stay for their friends - so their experience does not transfer to “weakly social” anime-style mobile games.</p><p>Or, from another angle: why can you play an anime-style mobile game for years?</p><p>The most intuitive answer: because there is always new content to play. That is certainly true, but for mobile games it is a little more complicated. Buy-to-play single-player games also update their content, usually as DLC, on cycles longer than half a year; MMOs and other client games likewise ship new versions every six months to a year. Anime-style mobile games, by contrast, take “regular events” to the extreme.</p><p>Take FGO: its themed events run essentially back to back. Under normal circumstances, a large-scale themed event arrives once a month, containing new levels, story, and characters, and lasting two to three weeks. 
Smaller update events occur weekly in between, including check-in events under various names, preview events that set the stage for upcoming main-story chapters, and crossover events with other IPs.</p><p><img src="https://s2.loli.net/2023/11/02/AF8kWCg4NdHryXK.png" alt="image.png"></p><p>In the image above, for example, FGO players spent the entire month of August on the large-scale themed event “Arctic - Summer World!”. September then opened with a laid-back check-in event lasting two weeks, plus “Road to 7”, a warm-up for the Chapter 7 main story, followed by another large-scale themed event lasting three weeks.</p><p>Dense, continuous regular events have become an unbreakable ancestral rule of anime-style mobile games. The fandom has even coined the idea of a “grass-growing period”: if a game’s events stall for even a month, players are left at a loss, feeling the grass grow under their feet.</p><p>In fact, the significance of regular events goes beyond “giving players something to do”. In earlier years, anime-style mobile games often marketed themselves on the serial-following experience: large-scale themed events usually arrive with substantial updates to the plot or character stories, which spark discussion and even fan creations in the community. Players who have followed a game for a long time are like viewers keeping up with the same drama - they always have something to talk about - which not only retains old players but also attracts new ones.</p><p>Blue Archive is a typical success story in this respect. Starting in 2021, I watched friends in my QQ group discuss it every day; every plot or event update fueled half a month of chatter, until I knew Hifumi, Shiroko, and Yuuka better than I knew my colleagues. 
Eventually, when the Chinese server launched, I was duly converted into a new player.</p><p><img src="https://s2.loli.net/2023/11/02/kZcaflBrD2hndbS.png" alt="image.png"></p><p>Before its Chinese release, Blue Archive’s event stories had already earned considerable popularity on video sites</p><p>Beyond update frequency, many anime-style mobile games also try to add genuinely new content to the gameplay. Take Arknights: tower defense, a core gameplay with seemingly little room for expansion, has been made quite elaborate.</p><p>First is the Contingency Contract mode, which lets players stack various difficulty modifiers - raising enemies’ attack and defense, limiting how many of their own characters can deploy, and so on - in exchange for better rewards. The same map plays completely differently under different contracts, and players freely tailor the modifiers to their character roster, a far cry from traditional fixed “hard modes”.</p><p><img src="https://s2.loli.net/2023/11/02/XhPfRDc2oOmpyQN.png" alt="image.png"></p><p><img src="https://s2.loli.net/2023/11/02/MktWFlvy14IdnZU.png" alt="image.png"></p><p>There is also Integrated Strategies, a pure roguelike mode: a permadeath mechanic makes the player start over once a run is lost; randomly generated levels pit the player against very different enemies each run; and replayable collectibles provide benefits but require the player to fulfill various conditions to unlock them. 
</p><p><img src="https://s2.loli.net/2023/11/02/Zy2xb9kpWLSVqBI.png" alt="image.png"></p><p>Some collectibles are unlocked by reaching a specific ending.</p><p>In addition, Arknights has experimented with Reclamation Algorithm, a simulation-strategy mode in which players can effectively play cooperatively.</p><p><img src="https://s2.loli.net/2023/11/02/HoZfeqndhbFs1YL.png" alt="image.png"></p><p>The farming mode of Reclamation Algorithm</p><p>For more expandable 3D games, even more can be done. Starting with Honkai Impact 3rd, miHoYo has been keen to fold assorted mini-games into its regular events, such as the obstacle-course party gameplay of “Honkai Bean Man”, plus Monopoly-style, flight-shooter, tower-defense, and other modes very different from the base game.</p><p><img src="https://s2.loli.net/2023/11/02/UHcAkvrqVIuN9CL.png" alt="image.png"></p><p>Honkai Impact 3rd’s “Honkai Bean Man”</p><p>The same goes for Genshin Impact: the level-building parkour mode “Divine Ingenuity”, the tower-defense mode “Theater Mechanicus”, the rhythm game “Leap Beat Response”, the brick-breaker “Akitsu Hajime”…</p><p><img src="https://s2.loli.net/2023/11/02/Y7qTWcaBnevwiPV.png" alt="image.png"></p><p>Most of the new modes in Genshin Impact and Honkai Impact 3rd do not stay in the game permanently, though; they are (temporarily, at least) retired once the event ends.</p><p>The subtlety of this approach is that for a game that has been running for several years, a new mode will not necessarily win players over. After all, as we said at the beginning, the player base of anime-style mobile games is now large and complex. 
It is hard for any single new mode to please all the players already there.</p><p>But if developers shrink from that complexity and only ship gameplay that has already been proven, repeating the sad cycle of the “reskinned mobile game” of years past, can they really keep players satisfied for 10 years?</p><h2 id="The-Original-God-and-KFC-linked-up"><a href="#The-Original-God-and-KFC-linked-up" class="headerlink" title="The Original God and KFC linked up"></a>Genshin Impact and KFC linked up</h2><p>Excellent new in-game content retains old players, but to keep winning over new players three or four years after launch, a game has to lean on factors outside the game itself, such as community atmosphere and external publicity.</p><p>Start with external publicity: in 2021, Genshin Impact ran a crossover with KFC. The cringeworthy slogan “Meet in another world, enjoy the food” echoed through KFC stores, while the meme images, sticker packs, and fan-made skits it spawned echoed across the Internet. Genshin players traded a moment of public embarrassment for an explosive breakout into the mainstream.</p><p>Genshin Impact’s crossovers and promotional strategy have followed a similar tenet ever since - go to restaurants, go to tourist attractions, go to the places ordinary people already are. Today it is hard to argue that Genshin Impact’s status as the anime-style mobile game with the broadest player base has nothing to do with this: even people who don’t play at all have probably seen its promotional material on the street.</p><p>Partnering with popular brands is not the only form of crossover. 
Arknights has gone the other way: it chooses brands with a similar tone and style, even if they are not especially mainstream.</p><p>As we have reported before, Arknights partnered with the World Wide Fund for Nature (WWF), launching a character based on a porpoise, with all proceeds from the campaign going to the WWF. More recently, Arknights teamed up with National Geographic China to release several outfits. Both crossovers relate to animal protection, and player acceptance has been high, with essentially no backlash - after all, animal elements run deep in Arknights’ bones, and most of the characters players have carefully raised have animal archetypes of their own.</p><p><img src="https://s2.loli.net/2023/11/02/J4C8qET5BYAIDHu.png" alt="image.png"></p><p>Beyond that, Arknights has also crossed over with Monster Hunter and Rainbow Six. These IPs have distinctive themes, yet keep a subtle stylistic consistency with Arknights and overlap considerably in player base. In a recent livestream, Arknights announced a second collaboration with R6 - both a rerun of the previous one and a new collaboration story - which suggests the first round went well.</p><p>Speaking of Arknights, it is worth discussing how community atmosphere appeals to new players. Longtime readers may remember that Yu Yansha has written quite a few analysis pieces on this game, and my own first Arknights article noted its inclusive community atmosphere.</p><p>That inclusiveness is not limited to the player community; the entire Arknights creator ecosystem is in good shape. 
Creators have a healthy feedback loop with the developers - since 2021, the Terra Exploration Society has organized an offline meetup every year, inviting a number of creators to take part.</p><p>The first year’s meetup was held in an art gallery, where the organizers framed the creators’ best works and displayed them as artwork, which genuinely moved quite a few of them.</p><p>A good community atmosphere also matters enormously to new players. Arknights has an abundance of guide videos, with a variety of squad configurations for any event or stage, which greatly lowers the difficulty threshold of tower defense. I benefited a lot from this as a newcomer; had there not been so many players willing to make guide videos, I might well have quit over a hard stage before my roster took shape.</p><h2 id="what-does-a-secondary-handheld-game-have-to-do-in-the-end-in-order-to-let-players-play-for-10-years"><a href="#what-does-a-secondary-handheld-game-have-to-do-in-the-end-in-order-to-let-players-play-for-10-years" class="headerlink" title="what does a secondary handheld game have to do in the end in order to let players play for 10 years"></a>What does an anime-style mobile game have to do to keep players playing for 10 years?</h2><p>Having said all this, the question from the start of the article still seems hard to answer: what, in the end, does an anime-style mobile game have to do to keep players playing for 10 years? 
The “playbooks” all point in a few directions. For new content, it can be high-frequency story updates, continuous expansion of the existing gameplay, or a grab bag of mini-games; for publicity, crossovers with popular brands work well, and so do collaborations with niche IPs of a similar style…</p><p>The ideal answer is “all of the above”, but even a game as well-resourced as Genshin Impact can only pursue a few of these directions. Honestly, it is fine if a game does not last a decade; most games in the world don’t. But as a player, I hope developers keep striving for it - aim at the highest, even if they only reach the middle. If a game sets out to run for 10 years, then whether or not it ultimately gets there, at least in the moment it gives me peace of mind.</p><p><img src="https://s2.loli.net/2023/10/31/GdRVvyo5CtnDJ82.png" alt="image.png"></p>]]></content>
    
    
    <summary type="html">Anime-style mobile games are no longer a young genre.</summary>
    
    
    
    <category term="Game News" scheme="https://www.nablepart.com/categories/Game-News/"/>
    
    
    <category term="Secondary" scheme="https://www.nablepart.com/tags/Secondary/"/>
    
    <category term="Handheld game" scheme="https://www.nablepart.com/tags/Handheld-game/"/>
    
    <category term="Girls&#39; Frontier" scheme="https://www.nablepart.com/tags/Girls-Frontier/"/>
    
    <category term="Yin Yang Shi" scheme="https://www.nablepart.com/tags/Yin-Yang-Shi/"/>
    
    <category term="Tomorrow&#39;s Ark" scheme="https://www.nablepart.com/tags/Tomorrow-s-Ark/"/>
    
    <category term="Progenitor" scheme="https://www.nablepart.com/tags/Progenitor/"/>
    
  </entry>
  
  <entry>
    <title>Linux System Security Hardening with SELinux - A Comprehensive Guide</title>
    <link href="https://www.nablepart.com/fcf480f7d6ab/"/>
    <id>https://www.nablepart.com/fcf480f7d6ab/</id>
    <published>2023-11-02T07:28:28.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
    <content type="html"><![CDATA[<h2 id="Introduction"><a href="#Introduction" class="headerlink" title="Introduction"></a>Introduction</h2><p>In today’s digital landscape, ensuring the security of Linux systems is of paramount importance. Threats to sensitive data and unauthorized access are constant concerns for organizations and individuals alike. One effective method for enhancing Linux system security is through the use of SELinux (Security-Enhanced Linux). SELinux is a security mechanism that implements Mandatory Access Control (MAC) in Linux systems, providing granular access control over system resources and enhancing its defense capabilities.<br>In this comprehensive guide, we will delve into the concepts, installation, configuration, and management of SELinux. We will explore how SELinux operates, its key features, and the steps involved in leveraging SELinux for Linux system security hardening. By the end, you will have a solid understanding of how to utilize SELinux to bolster the security of your Linux systems.</p><h2 id="Understanding-SELinux-Concepts-and-Principles"><a href="#Understanding-SELinux-Concepts-and-Principles" class="headerlink" title="Understanding SELinux: Concepts and Principles"></a>Understanding SELinux: Concepts and Principles</h2><p><img src="https://cdn.jsdelivr.net/gh/PirlosM/image@main/20231101135023.png"></p><p>Before diving into the specifics of using SELinux for system security hardening, let’s gain a fundamental understanding of its concepts and principles. SELinux operates on the basis of Mandatory Access Control (MAC), running on the Linux kernel. Unlike traditional Discretionary Access Control (DAC), SELinux provides fine-grained security control by assigning a unique security label to each system resource, such as files, devices, and processes. By defining detailed permission rules in its policy, SELinux limits process access to resources. 
This level of control ensures that even in the event of a compromised process or vulnerability, attackers find it challenging to gain unauthorized access to system resources and sensitive information.</p><h2 id="Installing-and-Enabling-SELinux"><a href="#Installing-and-Enabling-SELinux" class="headerlink" title="Installing and Enabling SELinux"></a>Installing and Enabling SELinux</h2><p>In Red Hat-family distributions such as Fedora, RHEL, and CentOS, SELinux comes installed and enabled by default. To verify the status of SELinux, you can use the command sestatus. If SELinux is not installed or enabled, you can install the necessary software packages and configure the settings by editing the configuration files.</p><h2 id="SELinux-Policy"><a href="#SELinux-Policy" class="headerlink" title="SELinux Policy"></a>SELinux Policy</h2><p>Central to SELinux is its policy, which defines the rules and permissions for security contexts. Policy files are typically located in the &#x2F;etc&#x2F;selinux directory and can be managed and configured using specific tools. The policy governs the behavior of SELinux, ensuring that resources are accessed securely based on their security contexts.</p><h2 id="Setting-SELinux-Labels"><a href="#Setting-SELinux-Labels" class="headerlink" title="Setting SELinux Labels"></a>Setting SELinux Labels</h2><p>In SELinux, every system resource has a unique security context label. For files and directories, you can use the ls -Z command to view the security context labels (and chcon to modify them). Similarly, the ps -eZ command allows you to view the security context of processes. SELinux automatically assigns the appropriate security context labels when resources are created.</p><h2 id="SELinux-Policy-Modes"><a href="#SELinux-Policy-Modes" class="headerlink" title="SELinux Policy Modes"></a>SELinux Policy Modes</h2><p>SELinux operates in three different policy modes: Enforcing, Permissive, and Disabled. In Enforcing mode, SELinux strictly enforces policy rules and logs any violations. 
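The three modes named here can be inspected from the shell. The sketch below parses a sample line in the format used by /etc/selinux/config rather than reading the real file, so it runs on any system; the commands in the trailing comments apply only on a live SELinux machine:

```shell
# Sketch: interpret the persistent mode setting as stored in
# /etc/selinux/config (SELINUX=enforcing|permissive|disabled).
# A sample string is parsed here so the snippet runs anywhere.
cfg="SELINUX=permissive"
mode=${cfg#SELINUX=}

case "$mode" in
  enforcing)  echo "policy enforced: violations blocked and logged" ;;
  permissive) echo "violations logged only, nothing blocked" ;;
  disabled)   echo "no policy applied" ;;
esac
# prints: violations logged only, nothing blocked

# On a live system (setenforce requires root):
#   getenforce        # print the current mode
#   setenforce 0      # switch to Permissive until reboot
#   sestatus          # full status, including the loaded policy
```

Note that moving between Disabled and the other modes takes effect only after a reboot, usually together with a filesystem relabel.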
In Permissive mode, SELinux still evaluates policy rules but only logs warning messages instead of blocking operations. In Disabled mode, SELinux does not apply any policy rules.</p><h2 id="Configuring-SELinux-Policy"><a href="#Configuring-SELinux-Policy" class="headerlink" title="Configuring SELinux Policy"></a>Configuring SELinux Policy</h2><p>Configuring SELinux policy can be done by modifying policy files or using command-line tools. Modifying policy files requires familiarity with the policy language and rules. Alternatively, command-line tools such as setsebool, semanage, and restorecon can be used to set boolean values, manage policy modules and ports, and restore the security context of files, respectively.</p><h2 id="SELinux-Logging"><a href="#SELinux-Logging" class="headerlink" title="SELinux Logging"></a>SELinux Logging</h2><p>SELinux logs operations that violate policy rules to the system logs. Tools such as ausearch, sealert, and audit2allow can be used to view and analyze SELinux logs, providing insights into security events and policy violations within the system.</p><h2 id="Managing-SELinux-Contexts"><a href="#Managing-SELinux-Contexts" class="headerlink" title="Managing SELinux Contexts"></a>Managing SELinux Contexts</h2><p>SELinux uses security context labels to identify and manage resources. A security context consists of three parts - user, role, and type - plus an optional MLS level. The chcon command allows you to change the security context of files or directories, while semanage fcontext is used to configure file contexts persistently.</p><h2 id="SELinux-and-Service-Management"><a href="#SELinux-and-Service-Management" class="headerlink" title="SELinux and Service Management"></a>SELinux and Service Management</h2><p>Many services in Linux systems run in separate processes, each with its specific security context. Enabling SELinux requires configuring the security context of services to ensure their proper functioning and interaction with other resources. 
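Each of those contexts is a colon-separated label. As a small sketch, here is how a sample label (illustrative, not read from a live system) splits into the parts described above, the way ls -Z or ps -eZ would display it:

```shell
# An SELinux security context is a colon-separated label:
#   user:role:type[:mls-level]
# similar to what `ls -Z` prints next to each file.
ctx="system_u:object_r:passwd_file_t:s0"   # sample label for illustration

user=$(echo "$ctx" | cut -d: -f1)   # SELinux user (distinct from the Linux user)
role=$(echo "$ctx" | cut -d: -f2)   # role
type=$(echo "$ctx" | cut -d: -f3)   # type: the part most policy rules match on

echo "user=$user role=$role type=$type"
# prints: user=system_u role=object_r type=passwd_file_t
```

For files, chcon changes such a label only until the next relabel, while semanage fcontext followed by restorecon makes the change persistent.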
The semanage command facilitates the management of policy modules and ports related to services.</p><h2 id="SELinux-and-Application-Development"><a href="#SELinux-and-Application-Development" class="headerlink" title="SELinux and Application Development"></a>SELinux and Application Development</h2><p>Custom-developed applications need to be configured with SELinux policies when SELinux is enabled. This process involves defining the required security context types and access rules for the application and utilizing tools to check and debug the interaction between the application and SELinux.</p><h2 id="SELinux-and-Auditing"><a href="#SELinux-and-Auditing" class="headerlink" title="SELinux and Auditing"></a>SELinux and Auditing</h2><p>In addition to enforcing policy rules, SELinux incorporates an auditing mechanism to record operations that violate policy rules. Tools are available to analyze and audit SELinux logs, providing insights into potential security risks and threats within the system.</p><h2 id="SELinux-and-File-Contexts"><a href="#SELinux-and-File-Contexts" class="headerlink" title="SELinux and File Contexts"></a>SELinux and File Contexts</h2><p>SELinux uses file contexts to identify the security attributes of files and directories. Incorrect or altered file contexts can pose security risks to the system. The restorecon command is used to restore the security context of files, ensuring their integrity and security.</p><h2 id="SELinux-and-Network-Security"><a href="#SELinux-and-Network-Security" class="headerlink" title="SELinux and Network Security"></a>SELinux and Network Security</h2><p>SELinux can play a significant role in protecting the network security of a system. It restricts process access to network resources and defines detailed access rules and policies to prevent unauthorized network connections and data transfers. 
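As a worked example of the file-context and auditing tools above, here is a configuration fragment, not a script to run blindly: it assumes the stock targeted policy and a hypothetical web root /srv/www, and every command requires root on an SELinux-enabled system.

```shell
# Persistently map a custom web root to the type the web server is
# allowed to read, then relabel the existing files to match.
semanage fcontext -a -t httpd_sys_content_t "/srv/www(/.*)?"
restorecon -Rv /srv/www

# Review recent denials and, if they reflect legitimate behavior,
# generate and load a small local policy module from them.
ausearch -m AVC -ts recent
audit2allow -a -M mylocalmod     # writes mylocalmod.te and mylocalmod.pp
semodule -i mylocalmod.pp
```

The module generated by audit2allow should always be reviewed before loading; blindly allowing every logged denial defeats the purpose of the policy.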
By leveraging SELinux, Linux system security can be further strengthened.</p><h2 id="conclusion"><a href="#conclusion" class="headerlink" title="Conclusion"></a>Conclusion</h2><p>In conclusion, utilizing SELinux for Linux system security hardening is an effective approach. Through granular access control and policy management, SELinux provides an additional layer of security, helping protect systems and sensitive data. However, using SELinux requires understanding and configuration, necessitating research, documentation study, and practical learning. To ensure system security, it is crucial to combine SELinux with other security measures such as firewalls, security auditing, and vulnerability management, forming a comprehensive security solution.<br>With this comprehensive guide, you now possess the knowledge and tools to implement SELinux for Linux system security hardening effectively. Embrace the power of SELinux and safeguard your systems from potential threats and unauthorized access.</p>]]></content>
    
    
    <summary type="html">In this comprehensive guide, we will take an in-depth look at the concepts, installation, configuration, and management of SELinux. We will explore how SELinux operates, its main features, and the steps to take to harden your Linux system security with SELinux. By the end, you will have a solid understanding of how to utilize SELinux to strengthen the security of your Linux system.</summary>
    
    
    
    <category term="Technical section" scheme="https://www.nablepart.com/categories/Technical-section/"/>
    
    
    <category term="Linux" scheme="https://www.nablepart.com/tags/Linux/"/>
    
    <category term="SELinux" scheme="https://www.nablepart.com/tags/SELinux/"/>
    
  </entry>
  
  <entry>
    <title>Astro - The Best Web Framework for 2023</title>
    <link href="https://www.nablepart.com/a45d2fbd3de8/"/>
    <id>https://www.nablepart.com/a45d2fbd3de8/</id>
    <published>2023-11-02T06:28:28.000Z</published>
    <updated>2025-08-25T09:00:39.798Z</updated>
    
    <content type="html"><![CDATA[<h2 id="Introduction"><a href="#Introduction" class="headerlink" title="Introduction"></a>Introduction</h2><p><img src="https://cdn.jsdelivr.net/gh/PirlosM/image@main/20231102144022.png"></p><p>In today’s fast-paced digital world, where distractions are abundant and internet browsing is predominantly done on mobile devices, speed and page loading time are crucial factors. Astro, a web framework that can be used as a Static Site Generator (SSG) or a simple backend for rendering non-SPA pages, has emerged as a powerful tool in this context. Astro allows developers to build content-focused websites in a component-based manner, making it a versatile and efficient solution for web development.</p><h2 id="The-Rise-of-Astro"><a href="#The-Rise-of-Astro" class="headerlink" title="The Rise of Astro"></a>The Rise of Astro</h2><p>Astro is a modern static site generator and frontend framework that gained significant popularity in 2022. In fact, it ranked 7th among JavaScript star projects, accumulating an impressive 15k stars within just one year. What sets Astro apart is its creator, Fred K. Schott, the mastermind behind Snowpack, an influential unbundling build tool. With such a strong foundation, Astro has quickly caught the attention of developers and is considered by some as the best web framework for 2023.</p><h2 id="The-Problem-with-Excessive-JavaScript"><a href="#The-Problem-with-Excessive-JavaScript" class="headerlink" title="The Problem with Excessive JavaScript"></a>The Problem with Excessive JavaScript</h2><p>The web development landscape has been evolving rapidly, especially for JavaScript front-end developers. However, the rapid pace of change sometimes leads us to overlook the ultimate goal of creating websites and web applications: serving the users. 
With the rise of Single-Page Applications (SPAs) like Vue and React, there has been a tendency to create SPAs for almost everything, even for simple content-based websites.</p><p>While SPAs have their merits for building web applications, they come with their own set of challenges. Firstly, backend frameworks started optimizing for REST API responses, neglecting the rendering of HTML with template engines, especially in Node.js. Secondly, SPAs, being client-side rendered, present difficulties for search engine optimization (SEO) as search engine crawlers cannot see the content during indexing.</p><p>To address these challenges, server-side rendering (SSR) was introduced, allowing the execution of client-side JavaScript on the server for initial rendering. However, this approach requires a Node.js server, which can be cumbersome and costly, particularly for content-based websites. This led to the emergence of Static Site Generators (SSGs) and pre-rendering as a solution. SSGs existed even before the popularity of SPAs but gained traction due to the aforementioned challenges.</p><h2 id="The-Limitations-of-Existing-SSGs"><a href="#The-Limitations-of-Existing-SSGs" class="headerlink" title="The Limitations of Existing SSGs"></a>The Limitations of Existing SSGs</h2><p>However, existing SSGs had their own limitations. Some were written in languages other than JavaScript, making it difficult to share UI components across different projects. Others, built with JavaScript using frameworks like Vue, React, or Svelte, resulted in excessive JavaScript due to hydration, which is unnecessary for every page. This is where Astro comes into play.</p><p>Astro was designed to address the limitations of traditional SSGs. One of its primary goals is to make it nearly impossible to build a slow website. In fact, tests have shown that Astro websites achieve a 40% improvement in loading speed compared to websites built with React Web frameworks. 
Moreover, Astro reduces the size of JS code by a staggering 90%.</p><h2 id="The-Astro-Solution"><a href="#The-Astro-Solution" class="headerlink" title="The Astro Solution"></a>The Astro Solution</h2><p>Astro started as a JavaScript-based SSG that does not generate JavaScript by default on the client-side. Instead, it executes JS code during the build process, similar to SSR frameworks, but without hydration since most content-based websites do not require JS. However, when JS is needed, Astro provides flexible options.<br>You can continue using JavaScript as before, employing imperative DOM operations. Alternatively, you can leverage lightweight libraries like Alpine.js or petite-vue, which provide minimal JS functionality. For advanced scenarios or when reusing UI components from other projects, Astro introduces “Islands.” These Islands are standalone components that can be imported from Vue, React, Svelte, and other frontend frameworks. They are individually rendered and injected into the final HTML, either statically (without hydration) or dynamically (with JS).</p><p>Unlike frameworks like Nuxt or Next.js, where nothing is static after the page loads due to full hydration, Astro generates truly static content. This means that unnecessary JavaScript injection is avoided, resulting in faster and more efficient websites. Additionally, Astro now supports SSR, allowing it to function as a simple backend framework with support for the best template engines available.</p><h2 id="Why-Astro-is-the-Best-Web-Framework-for-2023"><a href="#Why-Astro-is-the-Best-Web-Framework-for-2023" class="headerlink" title="Why Astro is the Best Web Framework for 2023"></a>Why Astro is the Best Web Framework for 2023</h2><p>In a world full of distractions, where browsing predominantly happens on mobile devices, speed and page loading are paramount. Astro, as a web framework, can be used as a Static Site Generator (SSG) or a simple backend for rendering non-SPA pages. 
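The island pattern described above can be sketched in a single Astro page; the Counter component and its path are hypothetical, while the client:* directives are Astro's real hydration controls:

```astro
---
// Frontmatter runs only at build time (or on the server with SSR enabled).
import Counter from '../components/Counter.jsx'; // hypothetical React island
---
<h1>Mostly static page</h1>
<!-- Rendered to plain HTML at build time: no JS shipped, no hydration -->
<Counter />
<!-- Shipped with its JS and hydrated as soon as the page loads -->
<Counter client:load />
<!-- Hydrated lazily, only when the component scrolls into the viewport -->
<Counter client:visible />
```

Each directive trades interactivity latency against initial payload, which is how Astro keeps pages static by default while still allowing rich components where needed.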
Its versatility and extensive features make it an ideal choice for web development in 2023.<br>Here are some key reasons why Astro stands out as the best web framework:</p><h3 id="Universal-Template-Engine"><a href="#Universal-Template-Engine" class="headerlink" title="Universal Template Engine"></a>Universal Template Engine</h3><p>Astro supports external components from Vue, React, Svelte, Lit, Preact, and Solid JS, making it the most flexible and universal template engine available. This allows easy reuse of component representations across different projects.</p><h3 id="Powerful-Routing-and-Query-Support"><a href="#Powerful-Routing-and-Query-Support" class="headerlink" title="Powerful Routing and Query Support"></a>Powerful Routing and Query Support</h3><p>Astro provides file-based URL parameter routing and query support, enabling developers to create dynamic and personalized web experiences.</p><h3 id="Image-Optimization-and-Transformation"><a href="#Image-Optimization-and-Transformation" class="headerlink" title="Image Optimization and Transformation"></a>Image Optimization and Transformation</h3><p>Astro offers built-in image optimization and transformation capabilities, improving website performance by reducing image size while maintaining visual quality.</p><h3 id="Markdown-and-Frontmatter-Support"><a href="#Markdown-and-Frontmatter-Support" class="headerlink" title="Markdown and Frontmatter Support"></a>Markdown and Frontmatter Support</h3><p>Astro supports Markdown and frontmatter, allowing developers to seamlessly integrate rich content and metadata into their websites.</p><h3 id="CSS-Scope-and-SASS-Support"><a href="#CSS-Scope-and-SASS-Support" class="headerlink" title="CSS Scope and SASS Support"></a>CSS Scope and SASS Support</h3><p>With built-in CSS scope and SASS support, Astro empowers developers to create modular and maintainable stylesheets for their web projects.</p><h3 id="Script-Tag-Scope-and-Binding"><a href="#Script-Tag-Scope-and-Binding" 
class="headerlink" title="Script Tag Scope and Binding"></a>Script Tag Scope and Binding</h3><p>Astro provides script tag scope and binding, making it easy to integrate custom elements (web components) into websites.</p><h3 id="Lazy-Loading-of-Images-and-Components"><a href="#Lazy-Loading-of-Images-and-Components" class="headerlink" title="Lazy Loading of Images and Components"></a>Lazy Loading of Images and Components</h3><p>Astro offers built-in lazy loading for both images and components, enhancing website performance by loading resources only when needed.</p><h3 id="Static-API-Endpoint-Support"><a href="#Static-API-Endpoint-Support" class="headerlink" title="Static API Endpoint Support"></a>Static API Endpoint Support</h3><p>Astro supports static API endpoints, enabling seamless integration with external data sources or services.</p><h3 id="Multiple-Runtimes"><a href="#Multiple-Runtimes" class="headerlink" title="Multiple Runtimes"></a>Multiple Runtimes</h3><p>Astro supports multiple runtimes, including Node.js, Deno, and Bun. This flexibility allows developers to choose the runtime that best suits their project requirements.</p><h3 id="Easy-Deployment-to-Major-Hosting-Platforms"><a href="#Easy-Deployment-to-Major-Hosting-Platforms" class="headerlink" title="Easy Deployment to Major Hosting Platforms"></a>Easy Deployment to Major Hosting Platforms</h3><p>Astro can be easily deployed to popular web hosts, including edge providers like Netlify, Vercel, Cloudflare, Firebase, Surge, Render, Heroku, and more.<br>All of these features make Astro the ultimate tool for various web development needs, including event websites, checklists, tutorials, portfolios, marketing sites, video platforms, custom e-commerce sites, and even blogs or news websites. 
For simple SPAs, such as websites with a fixed audio player, Astro can be seamlessly integrated with Hotwire’s Turbo.<br>Now, with the new support for “View Transitions,” Astro can preserve the state during page navigation, providing a smooth user experience. Considering all these capabilities, it becomes evident why Astro is the best web framework for building content-focused websites in 2023.</p><h2 id="Conclusion"><a href="#Conclusion" class="headerlink" title="Conclusion"></a>Conclusion</h2><p>Astro has emerged as a powerful web framework, offering developers a versatile solution for building efficient and high-performing websites. Its ability to optimize page loading speed and reduce JS code size makes it a standout choice in the ever-evolving web development landscape. With Astro’s extensive features, universal template engine, and easy deployment options, it is undoubtedly the best web framework for 2023. Embrace Astro and unlock its potential to create exceptional web experiences that captivate users in this fast-paced digital era.</p>]]></content>
    
    
    <summary type="html">Astro has become a powerful web framework that provides developers with a versatile solution for building efficient, high-performance websites. It is undoubtedly the best web framework of 2023. In this fast-paced digital age, embrace Astro and realize its potential to create extraordinary web experiences that engage users.</summary>
    
    
    
    <category term="Technical section" scheme="https://www.nablepart.com/categories/Technical-section/"/>
    
    
    <category term="Astro" scheme="https://www.nablepart.com/tags/Astro/"/>
    
    <category term="Web framework" scheme="https://www.nablepart.com/tags/Web-framework/"/>
    
    <category term="2023 Best" scheme="https://www.nablepart.com/tags/2023-Best/"/>
    
  </entry>
  
  <entry>
    <title>Why Windows 11 is Struggling to Replace Windows 10</title>
    <link href="https://www.nablepart.com/72ee286a10f4/"/>
    <id>https://www.nablepart.com/72ee286a10f4/</id>
    <published>2023-11-02T01:28:28.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
    <content type="html"><![CDATA[<blockquote><p>In the ever-evolving landscape of operating systems, the transition from one version to another is often met with mixed reactions. Windows 11, the successor to the widely-used Windows 10, has faced challenges in gaining widespread acceptance from users. Despite being on the market for two years, Windows 11 has failed to achieve the same market share as its predecessor. This article delves into the reasons behind Windows 11’s struggle to replace Windows 10, exploring the hardware requirements, lack of compelling upgrade reasons, and the potential rise of alternative operating systems such as Linux.</p></blockquote><p><img src="https://cdn.jsdelivr.net/gh/PirlosM/image@main/20231102142109.png"></p><h2 id="The-Market-Share-Battle"><a href="#The-Market-Share-Battle" class="headerlink" title="The Market Share Battle"></a>The Market Share Battle</h2><p>Windows 10 has maintained a dominant position in the global desktop market, with a market share of 71.64%, while Windows 11 lags behind at 23.61%. Despite some progress in recent months, Windows 11 has not been able to surpass Windows 10’s stronghold. This is evident not only in the general user base but also among gamers. According to Valve’s monthly Steam hardware and software survey, 57% of players still use Windows 10, while only 37% have migrated to Windows 11. It is clear that Windows 11 has not been able to sway users to upgrade from Windows 10.</p><h2 id="Unfavorable-Reception"><a href="#Unfavorable-Reception" class="headerlink" title="Unfavorable Reception"></a>Unfavorable Reception</h2><p>The low acceptance of Windows 11 can be attributed to two main factors: stringent hardware requirements and the lack of compelling reasons to upgrade. Many users have criticized the hardware requirements imposed by Microsoft, which have resulted in older CPUs and other components being incompatible with Windows 11. 
This limitation has deterred users from adopting the new operating system, as they do not see a significant performance improvement compared to Windows 10. One user aptly pointed out, “When you force new hardware to replace perfectly fine old hardware just to sell an operating system, adoption will stall.”<br>Furthermore, the user experience offered by Windows 11 does not present a compelling reason for users to upgrade. The functionality of Windows 11 is almost identical to that of Windows 10, leading users to question the necessity of the switch. As one early adopter of Windows 11 shared, “For end-users, there is almost no difference in functionality between Windows 10 and 11. So, I agree with both viewpoints: there is no compelling reason to switch, and the restriction on older hardware hampers adoption.”</p><h2 id="The-Inevitable-Demise-of-Windows-10"><a href="#The-Inevitable-Demise-of-Windows-10" class="headerlink" title="The Inevitable Demise of Windows 10"></a>The Inevitable Demise of Windows 10</h2><p><img src="https://cdn.jsdelivr.net/gh/PirlosM/image@main/20231102142223.png"></p><p>While many users resist the transition to Windows 11, Microsoft’s support for Windows 10 has a limited lifespan. Windows 11 may not be the enticing carrot for users, but the end of support for Windows 10 is the stick that will eventually force users to upgrade. This will inevitably lead to hardware upgrades, as Windows 10’s discontinuation will push businesses to invest in new devices. Lenovo’s Senior Vice President and President of the Intelligent Devices Group, Luca Rossi, predicts a surge in demand in the commercial sector when Windows 10 reaches the end of its life in 2024&#x2F;2025. The advent of AI-powered PCs may further drive sustained demand for Windows 11. 
However, the strict hardware requirements remain a significant obstacle to the widespread adoption of Windows 11.</p><h2 id="Bypassing-Hardware-Restrictions"><a href="#Bypassing-Hardware-Restrictions" class="headerlink" title="Bypassing Hardware Restrictions"></a>Bypassing Hardware Restrictions</h2><p>Despite the hardware limitations, resourceful users have discovered unofficial methods to bypass these restrictions and successfully install Windows 11 on unsupported machines. A user on Twitter shared a method involving adding the “&#x2F;product server” switch to the setup.exe file in the Windows 11 installation directory, allowing installation on almost any PC. While this workaround may encourage some users to upgrade, it is unlikely to replace Windows 10 as the dominant Windows operating system in the short term.</p><h2 id="Exploring-Alternatives-The-Linux-Opportunity"><a href="#Exploring-Alternatives-The-Linux-Opportunity" class="headerlink" title="Exploring Alternatives: The Linux Opportunity"></a>Exploring Alternatives: The Linux Opportunity</h2><p>As Windows 11 faces resistance, users are exploring options beyond the Windows ecosystem. Some users have expressed a willingness to switch to Mac or Linux, citing the demanding hardware requirements of Windows 11 as a motivation. Linux, in particular, has been mentioned as a potential alternative. The lightweight nature of Linux allows it to run on older machines, making it an attractive choice for those unable to purchase a Windows license. Additionally, Linux is often perceived as more private and secure. The availability of various Linux distributions, such as Ubuntu and PureOS, provides tailored solutions for specific needs. 
However, Linux adoption among general users can be hindered by a preference for plug-and-play convenience, compatibility with mainstream software, and a perception of technical complexity.</p><h2 id="The-Challenges-of-Switching-to-Linux"><a href="#The-Challenges-of-Switching-to-Linux" class="headerlink" title="The Challenges of Switching to Linux"></a>The Challenges of Switching to Linux</h2><p>While Linux offers advantages, including compatibility with older hardware and enhanced privacy and security, there are challenges to consider. Mainstream software, such as Adobe Creative Suite and Microsoft 365, may not be available on Linux, limiting its appeal to users who rely on these tools. The availability of software on Linux can be unpredictable, and the transition may require a learning curve. However, some users have successfully made the switch to Linux, accepting the need for a slight adjustment in workflow and software availability. The demand for Windows programs to run on Linux is a recurring request among users, highlighting the desire for a seamless transition.</p><h2 id="Conclusion"><a href="#Conclusion" class="headerlink" title="Conclusion"></a>Conclusion</h2><p>Windows 11’s struggle to replace Windows 10 can be attributed to its stringent hardware requirements and the lack of compelling reasons for users to upgrade. While Windows 11 has made progress, Windows 10 continues to dominate the desktop market. As the end of support for Windows 10 approaches, hardware manufacturers anticipate increased demand, and the advancement of AI-powered PCs may drive the adoption of Windows 11. However, the strict hardware requirements remain a significant obstacle. In the face of these challenges, some users are considering Linux as an alternative, drawn to its lightweight nature and tailored solutions. While Linux adoption presents its own set of challenges, it offers an opportunity for users seeking alternatives to the Windows ecosystem.</p>]]></content>
    
    
    <summary type="html">Despite being on the market for two years, Windows 11 has failed to gain the same market share as its predecessor. This article takes a closer look at the reasons why Windows 11 has struggled to replace Windows 10, including hardware requirements, the lack of a compelling reason to upgrade, and the potential rise of alternative operating systems such as Linux.</summary>
    
    
    
    <category term="Technical section" scheme="https://www.nablepart.com/categories/Technical-section/"/>
    
    
    <category term="Windows 11" scheme="https://www.nablepart.com/tags/Windows-11/"/>
    
    <category term="Windows 10" scheme="https://www.nablepart.com/tags/Windows-10/"/>
    
  </entry>
  
  <entry>
    <title>Asynchronous Programming in Go - Harnessing the Power of &quot;Future&quot; and &quot;Promise&quot;</title>
    <link href="https://www.nablepart.com/780ab1a4bd7b/"/>
    <id>https://www.nablepart.com/780ab1a4bd7b/</id>
    <published>2023-11-01T13:28:28.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
    <content type="html"><![CDATA[<h2 id="Introduction"><a href="#Introduction" class="headerlink" title="Introduction"></a>Introduction</h2><p>In the realm of modern software development, asynchronous programming has become a common practice to enhance performance and responsiveness. Go language offers various methods for asynchronous programming, among which Futures and Promises stand out as powerful tools. In this comprehensive guide, we will delve deep into the world of asynchronous programming in Go, with a specific focus on utilizing Futures and Promises.</p><h2 id="The-Basics-of-Asynchronous-Programming-in-Go"><a href="#The-Basics-of-Asynchronous-Programming-in-Go" class="headerlink" title="The Basics of Asynchronous Programming in Go"></a>The Basics of Asynchronous Programming in Go</h2><p>Go language utilizes goroutines and channels as the fundamental building blocks for asynchronous programming. However, in more complex scenarios, we may require advanced tools like Futures and Promises. Let’s get acquainted with these concepts.</p><h2 id="Introduction-to-Futures"><a href="#Introduction-to-Futures" class="headerlink" title="Introduction to Futures"></a>Introduction to Futures</h2><p>Futures represent the eventual result of an asynchronous operation. They allow us to handle the outcome of an operation that will be completed at some point in the future. By utilizing Futures, we can write code that continues execution without blocking, making our programs more efficient and responsive.</p><h2 id="Understanding-Promises"><a href="#Understanding-Promises" class="headerlink" title="Understanding Promises"></a>Understanding Promises</h2><p>Promises, on the other hand, serve as the mechanism to set the value of a Future. They allow us to asynchronously assign a value to a Future, which can then be retrieved and utilized by other parts of our program. 
Promises play a crucial role in Go’s asynchronous programming paradigm.</p><h2 id="Creating-Futures-and-Promises-in-Go"><a href="#Creating-Futures-and-Promises-in-Go" class="headerlink" title="Creating Futures and Promises in Go"></a>Creating Futures and Promises in Go</h2><p>While Go’s standard library does not provide built-in support for Futures and Promises, we can leverage third-party libraries like “go-futures” to create them. Let’s explore how we can create and utilize Futures and Promises in Go.</p><h3 id="Creating-a-Future"><a href="#Creating-a-Future" class="headerlink" title="Creating a Future"></a>Creating a Future</h3><p>To create a Future in Go, we can use the “go-futures” library. This library provides a simple and intuitive API for working with asynchronous operations. By utilizing the “New” function from the “futures” package, we can create a new Future instance.</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">future := futures.New()</span><br></pre></td></tr></table></figure><h3 id="Setting-the-Value-of-a-Future"><a href="#Setting-the-Value-of-a-Future" class="headerlink" title="Setting the Value of a Future"></a>Setting the Value of a Future</h3><p>Once we have a Future, we can set its value using a Promise. A Promise is obtained by calling the “Promise” method on a Future instance. 
By invoking the “SetValue” method on a Promise, we can assign a value to the associated Future.</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">promise := future.Promise()</span><br><span class="line">promise.SetValue(<span class="string">&quot;Hello, Future!&quot;</span>)</span><br></pre></td></tr></table></figure><h3 id="Retrieving-the-Value-of-a-Future"><a href="#Retrieving-the-Value-of-a-Future" class="headerlink" title="Retrieving the Value of a Future"></a>Retrieving the Value of a Future</h3><p>To retrieve the value of a Future, we can use the “Get” method. This method returns the value and an error associated with the Future. By calling “Get” on a Future, we can access the result of the asynchronous operation.</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">value, err := future.Get()</span><br></pre></td></tr></table></figure><h2 id="Advanced-Applications-of-Futures-and-Promises-in-Go"><a href="#Advanced-Applications-of-Futures-and-Promises-in-Go" class="headerlink" title="Advanced Applications of Futures and Promises in Go"></a>Advanced Applications of Futures and Promises in Go</h2><p>Futures and Promises offer more than just the basics of asynchronous programming. They provide a range of advanced features that can greatly enhance the functionality and robustness of our asynchronous code. Let’s explore some of these advanced applications.</p><h3 id="Chaining-Asynchronous-Operations"><a href="#Chaining-Asynchronous-Operations" class="headerlink" title="Chaining Asynchronous Operations"></a>Chaining Asynchronous Operations</h3><p>In complex applications, we often encounter scenarios where multiple asynchronous operations depend on each other. 
Futures and Promises provide an elegant way to handle such dependencies through chaining. By utilizing the “Then” method on a Future, we can easily define a chain of asynchronous operations.</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">future1 := fetchData(<span class="string">&quot;https://api.example.com/data1&quot;</span>)</span><br><span class="line">future2 := future1.Then(<span class="function"><span class="keyword">func</span><span class="params">(data1 <span class="keyword">interface</span>&#123;&#125;)</span></span> <span class="keyword">interface</span>&#123;&#125; &#123;</span><br><span class="line">  <span class="comment">// Handle data1 and return a new Future</span></span><br><span class="line">  <span class="keyword">return</span> fetchData(<span class="string">&quot;https://api.example.com/data2&quot;</span>) </span><br><span class="line">&#125;)</span><br></pre></td></tr></table></figure><p>In this example, the execution of “future2” depends on the completion of “future1”.</p><h3 id="Error-Handling"><a href="#Error-Handling" class="headerlink" title="Error Handling"></a>Error Handling</h3><p>When dealing with asynchronous operations, it is essential to handle errors effectively. Futures and Promises often provide dedicated methods for error handling. 
By utilizing these methods, we can gracefully handle errors and take appropriate actions.</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">future := fetchData(<span class="string">&quot;https://api.example.com/data&quot;</span>)</span><br><span class="line">future.OnError(<span class="function"><span class="keyword">func</span><span class="params">(err <span class="type">error</span>)</span></span> &#123;</span><br><span class="line">  <span class="comment">// Handle the error  </span></span><br><span class="line">&#125;)</span><br></pre></td></tr></table></figure><h3 id="Controlling-Timeouts"><a href="#Controlling-Timeouts" class="headerlink" title="Controlling Timeouts"></a>Controlling Timeouts</h3><p>Controlling timeouts is crucial when performing asynchronous operations. Futures and Promises can help us implement timeout control effectively. By using the “GetWithTimeout” method on a Future, we can specify a timeout duration and retrieve the result within the specified time limit.</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">future := fetchData(<span class="string">&quot;https://api.example.com/data&quot;</span>)</span><br><span class="line">result, err := future.GetWithTimeout(<span class="number">5</span> * time.Second)</span><br></pre></td></tr></table></figure><h3 id="Managing-Concurrent-Operations"><a href="#Managing-Concurrent-Operations" class="headerlink" title="Managing Concurrent Operations"></a>Managing Concurrent Operations</h3><p>Efficiently managing multiple concurrent asynchronous operations can be challenging. However, Futures and Promises provide a straightforward approach to achieve this. 
By utilizing the “All” method from the “futures” package, we can combine multiple Futures into a single Future that completes when all the dependent Futures are resolved.</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">future1 := fetchData(<span class="string">&quot;https://api.example.com/data1&quot;</span>)</span><br><span class="line">future2 := fetchData(<span class="string">&quot;https://api.example.com/data2&quot;</span>)</span><br><span class="line"></span><br><span class="line">combinedFuture := futures.All(future1, future2)</span><br><span class="line">result, err := combinedFuture.Get() </span><br></pre></td></tr></table></figure><p>In this example, the “combinedFuture” will only complete when both “future1” and “future2” are resolved.</p><h2 id="Conclusion"><a href="#Conclusion" class="headerlink" title="Conclusion"></a>Conclusion</h2><p>In this comprehensive exploration of asynchronous programming in Go, we have covered the various aspects of utilizing Futures and Promises. From the basics of creation and usage to advanced applications like chaining asynchronous operations, error handling, timeout control, and managing concurrency, Futures and Promises provide a flexible and powerful toolkit for asynchronous programming in Go. By incorporating these tools into our code, we can improve code structure, maintainability, and effectively handle complex asynchronous logic.</p><p>Futures and Promises play a vital role in Go’s asynchronous programming paradigm, offering structured code and powerful functionality. If you are developing complex asynchronous applications, exploring Futures and Promises is definitely worth considering.</p>]]></content>
    
    
    <summary type="html">In this comprehensive guide, we&#39;ll dive into the asynchronous programming world of the Go language, with a special focus on the use of Futures and Promises.</summary>
    
    
    
    <category term="Technical section" scheme="https://www.nablepart.com/categories/Technical-section/"/>
    
    
    <category term="Futures" scheme="https://www.nablepart.com/tags/Futures/"/>
    
    <category term="Promises" scheme="https://www.nablepart.com/tags/Promises/"/>
    
    <category term="Asynchronous Programming" scheme="https://www.nablepart.com/tags/Asynchronous-Programming/"/>
    
    <category term="Go language" scheme="https://www.nablepart.com/tags/Go-language/"/>
    
  </entry>
  
  <entry>
    <title>Born from grassroots CN APEX dreamers</title>
    <link href="https://www.nablepart.com/d6165950c022/"/>
    <id>https://www.nablepart.com/d6165950c022/</id>
    <published>2023-11-01T12:00:00.000Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
    <content type="html"><![CDATA[<p><img src="https://s2.loli.net/2023/11/02/bXxOdeWrkpznKl1.png" alt="image.png"></p><h2 id="Born-from-grassroots-CN-APEX-dreamers-with-what-to-go-to-the-present"><a href="#Born-from-grassroots-CN-APEX-dreamers-with-what-to-go-to-the-present" class="headerlink" title="Born from grassroots: what carried CN APEX’s dreamers this far?"></a>Born from grassroots: what carried CN APEX’s dreamers this far?</h2><p>The crowd at the ALGS Championship, APEX’s highest-level event, chanted these words at the Copper Box Arena in Birmingham, England, on September 9 of this year. A few hours later, the phrase reappeared as a trending search term at the top of the Hot 100.</p><p><img src="https://s2.loli.net/2023/11/02/goCc2bIpW5vKmns.png" alt="image.png"></p><p>The Chinese APEX teams, originally given little chance, eventually made it to the finals of this ALGS. DF, with a dominating performance on the first day of the main event, advanced to the finals as a dark horse, while the other team, MDY.W, played steadily and also broke into the finals to meet them.</p><p>The only two domestic teams, making history at every step, gave Chinese players, who had assumed that “making it to the finals would already count as success”, real hope that CN APEX could win the championship. 
Their excellent play in this tournament also made a name for the region: when EA announced the Year 4 program in early October, it gave CN a direct invitation to next year’s ALGS.</p><p><img src="https://s2.loli.net/2023/11/02/oEFNkDIbgTZSLUy.png" alt="image.png"></p><h2 id="Welcome-to-the-ALGS-CN-Region"><a href="#Welcome-to-the-ALGS-CN-Region" class="headerlink" title="Welcome to the ALGS, CN Region!"></a>Welcome to the ALGS, CN Region!</h2><p>Luckily, CN APEX’s unexpected road toward the championship was followed and documented by our agency.</p><p>From the players’ homes, full of apprehension and wandering thoughts before the tournament, to Birmingham, where passion and fury poured out after the opening rounds, the two grassroots teams talked about the tournament and each other, as well as quite a few of their own stories. Some took the plunge and joined a team with no pay, no sponsorship, and no club behind it; some set aside more lucrative full-time streaming careers for a while; some danced from stage to stage.</p><p><img src="https://s2.loli.net/2023/11/02/W79vfVsRCH3N85x.png" alt="image.png"></p><p>But the one constant is that all these CN APEX players, who come from different places and head side by side in the same direction, look like the protagonist of a dream one of MDY.W’s veterans once had:</p><p>Before the tournament, he dreamed that in the decisive match they took the championship with the final victory. He just kept crying beside the sea of people and said nothing. Then he realized it was a dream and woke up, but couldn’t stop crying, because dream or reality, this was what they had been chasing all along.</p><p>As the new season of APEX is about to launch, this video brings you the story of this group of dream chasers. 
About their lives on stage and behind the scenes, about their growth, and about the dreams they continue to chase.</p><p><img src="https://s2.loli.net/2023/10/31/GdRVvyo5CtnDJ82.png" alt="image.png"></p>]]></content>
    
    
    <summary type="html">Born from grassroots: what carried CN APEX&#39;s dreamers this far?</summary>
    
    
    
    <category term="Game News" scheme="https://www.nablepart.com/categories/Game-News/"/>
    
    
    <category term="CN APEX" scheme="https://www.nablepart.com/tags/CN-APEX/"/>
    
    <category term="APEX" scheme="https://www.nablepart.com/tags/APEX/"/>
    
  </entry>
  
  <entry>
    <title>The Rise of Chinese Lingerie Brands in Manhattan, New York City</title>
    <link href="https://www.nablepart.com/cf474b701a1c/"/>
    <id>https://www.nablepart.com/cf474b701a1c/</id>
    <published>2023-11-01T01:28:28.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
    <content type="html"><![CDATA[<blockquote><p>In the bustling streets of Manhattan, amidst the myriad of fashion boutiques and trendy stores, a unique cultural fusion is taking place. Chinese lingerie brands are making their mark, offering a fresh perspective on femininity, individuality, and sexuality. These brands are redefining the lingerie industry with their exquisite designs, sustainable practices, and celebration of Chinese aesthetics. Join us as we explore the rise of Chinese lingerie brands in Manhattan and delve into the stories behind these innovative and empowering labels.</p></blockquote><p><img src="https://cdn.jsdelivr.net/gh/PirlosM/image@main/20231101131424.png"></p><h2 id="Introduction"><a href="#Introduction" class="headerlink" title="Introduction"></a>Introduction</h2><p>In recent years, Chinese lingerie brands have gained significant attention in the fashion world. Their unique blend of Chinese aesthetics, modern style, and sustainable practices has captivated consumers globally. Now, these innovative brands are making their way to the heart of Manhattan, New York City. Here, in the fashion capital of the world, they are redefining the lingerie industry and challenging traditional perceptions of femininity and sexuality.</p><h2 id="The-Origins-of-Chinese-Lingerie-Brands-in-Manhattan"><a href="#The-Origins-of-Chinese-Lingerie-Brands-in-Manhattan" class="headerlink" title="The Origins of Chinese Lingerie Brands in Manhattan"></a>The Origins of Chinese Lingerie Brands in Manhattan</h2><p>The journey of Chinese lingerie brands in Manhattan began with the iconic White Rabbit candy. Originally featuring the “ABC Mickey Mouse” mascot, the candy factory later changed its branding to a jumping white rabbit logo. This transition occurred after the factory’s nationalization, when the use of Western imagery was deemed politically problematic. 
</p><p>Chop Suey Club, a Chinese art and design store based in Chinatown, Manhattan, took inspiration from Edward Hopper’s famous painting, “Chop Suey.” This painting, sold for a record-breaking $92 million in 2018, portrays two women dining alone, symbolizing the changes in American society during the Roaring Twenties.</p><h2 id="Pillowbook-Redefining-Chinese-Style"><a href="#Pillowbook-Redefining-Chinese-Style" class="headerlink" title="Pillowbook: Redefining Chinese Style"></a>Pillowbook: Redefining Chinese Style</h2><p>One of the prominent Chinese lingerie brands making waves in Manhattan is Pillowbook. Founded by Irene Lu, a Beijing-born designer, Pillowbook aims to reinvent Chinese aesthetics and accentuate the beauty of the Asian physique. Lu believes that Asian girls are not given enough credit for their sexiness, and Pillowbook seeks to change that perception.</p><h3 id="The-Power-of-Dudou"><a href="#The-Power-of-Dudou" class="headerlink" title="The Power of Dudou"></a>The Power of Dudou</h3><p>At the heart of Pillowbook’s success lies the dudou, a classic Chinese intimate wear. Lu’s special lingerie design, the dudou, has been a bestseller since the brand’s inception. This traditional Chinese garment is brought to life with a modern twist, combining luxurious silk and delicate embroidery. Each piece is meticulously handmade, showcasing the brand’s commitment to detail and craftsmanship.</p><h3 id="Modern-Chinese-Style-Revived"><a href="#Modern-Chinese-Style-Revived" class="headerlink" title="Modern Chinese Style Revived"></a>Modern Chinese Style Revived</h3><p>Pillowbook embodies modern Chinese style, embracing a strong femininity that comes in every shape, size, and form. The brand’s lingerie designs celebrate individuality and empower women to feel confident and sexy. 
Through its unique blend of traditional Chinese elements and contemporary fashion, Pillowbook has become a symbol of the evolving perceptions of beauty and style in China.</p><h2 id="4-The-End-Sustainable-and-Edgy"><a href="#4-The-End-Sustainable-and-Edgy" class="headerlink" title="The End: Sustainable and Edgy"></a>The End: Sustainable and Edgy</h2><p>Another Chinese lingerie brand making a splash in Manhattan is The End. Designed by Taiwanese model Beikuo, who graduated from Parsons New School Of Design, The End brings a fresh perspective to sustainable fashion. The brand exclusively uses organic cotton, prioritizing environmental consciousness without compromising style. </p><h3 id="Quirk-and-Kink-in-Sustainable-Fashion"><a href="#Quirk-and-Kink-in-Sustainable-Fashion" class="headerlink" title="Quirk and Kink in Sustainable Fashion"></a>Quirk and Kink in Sustainable Fashion</h3><p>The End’s designs exude a sense of edginess and playfulness. With a focus on sustainability, the brand combines innovative designs with quirky details, catering to those who embrace their quirks and fetishes. The End challenges societal norms and encourages self-expression through fashion, embracing the beauty of imperfections.</p><h3 id="Embracing-Imperfections"><a href="#Embracing-Imperfections" class="headerlink" title="Embracing Imperfections"></a>Embracing Imperfections</h3><p>In the world of lingerie, The End believes that imperfections are what make us human. The brand celebrates uniqueness and encourages individuals to embrace their flaws and quirks. 
By showcasing lingerie that goes beyond traditional notions of perfection, The End aims to redefine beauty standards and promote self-acceptance.</p><h2 id="Provocative-Perceptions-Changing-Attitudes-towards-Lingerie"><a href="#Provocative-Perceptions-Changing-Attitudes-towards-Lingerie" class="headerlink" title="Provocative Perceptions: Changing Attitudes towards Lingerie"></a>Provocative Perceptions: Changing Attitudes towards Lingerie</h2><p>The rise of Chinese lingerie brands in Manhattan reflects a broader shift in societal attitudes towards lingerie and sexuality. In the past, instances of “indecent exposure” would have been deemed inappropriate. However, the rise of feminism in the 1920s led to gradual changes in these perceptions. Restaurants began posting signs that read “Tables for Ladies,” signaling a shift towards a more inclusive and accepting society.</p><p><img src="https://cdn.jsdelivr.net/gh/PirlosM/image@main/20231101131905.png"></p><h2 id="Breaking-Gender-Norms-Unisex-Lingerie-Lines"><a href="#Breaking-Gender-Norms-Unisex-Lingerie-Lines" class="headerlink" title="Breaking Gender Norms: Unisex Lingerie Lines"></a>Breaking Gender Norms: Unisex Lingerie Lines</h2><p>Chinese lingerie brands in Manhattan are breaking down gender norms and embracing gender fluidity. It is no longer just women who wear lingerie; men are also embracing these intimate pieces. The unisex lingerie lines offered by Chinese brands challenge traditional notions of femininity and masculinity, promoting inclusivity and self-expression for all individuals.</p><h2 id="The-Influence-of-Chinese-Culture-on-Lingerie-Design"><a href="#The-Influence-of-Chinese-Culture-on-Lingerie-Design" class="headerlink" title="The Influence of Chinese Culture on Lingerie Design"></a>The Influence of Chinese Culture on Lingerie Design</h2><p>Chinese lingerie brands draw inspiration from their rich cultural heritage, infusing traditional Chinese elements into their designs. 
Symbolism and embroidery play a significant role in creating unique and visually stunning lingerie pieces. Luxurious silk and delicate details showcase the beauty and elegance of Chinese aesthetics.</p><h3 id="Symbolism-and-Embroidery"><a href="#Symbolism-and-Embroidery" class="headerlink" title="Symbolism and Embroidery"></a>Symbolism and Embroidery</h3><p>Chinese culture is steeped in symbolism, and this is reflected in the intricate embroidery found in Chinese lingerie designs. Each stitch tells a story, conveying meanings of luck, prosperity, and love. From delicate floral patterns to intricate dragon motifs, these designs pay homage to Chinese traditions and craftsmanship.</p><h3 id="Luxurious-Silk-and-Delicate-Details"><a href="#Luxurious-Silk-and-Delicate-Details" class="headerlink" title="Luxurious Silk and Delicate Details"></a>Luxurious Silk and Delicate Details</h3><p>Silk has long been associated with luxury and elegance in Chinese culture. Chinese lingerie brands embrace this tradition, using luxurious silk fabrics to create sensual and comfortable pieces. Delicate lace, intricate cutouts, and embellishments further enhance the beauty and allure of their designs, ensuring that each piece is a work of art in its own right.</p><h2 id="The-Impact-of-Chinese-Lingerie-Brands-in-Manhattan"><a href="#The-Impact-of-Chinese-Lingerie-Brands-in-Manhattan" class="headerlink" title="The Impact of Chinese Lingerie Brands in Manhattan"></a>The Impact of Chinese Lingerie Brands in Manhattan</h2><p>Chinese lingerie brands have made a significant impact in Manhattan, empowering women and shaping fashion trends. 
By celebrating individuality and promoting body positivity, these brands have challenged traditional beauty standards and offered a fresh perspective on lingerie.</p><h3 id="Empowering-Women"><a href="#Empowering-Women" class="headerlink" title="Empowering Women"></a>Empowering Women</h3><p>Chinese lingerie brands have played a crucial role in empowering women to embrace their bodies and feel confident in their skin. Their inclusive designs cater to women of all shapes, sizes, and backgrounds, promoting a sense of self-acceptance and body positivity. By embracing Chinese aesthetics and blending them with modern fashion, these brands have given women a platform to express their individuality and celebrate their sensuality.</p><h3 id="Shaping-Fashion-Trends"><a href="#Shaping-Fashion-Trends" class="headerlink" title="Shaping Fashion Trends"></a>Shaping Fashion Trends</h3><p>The rise of Chinese lingerie brands in Manhattan has not gone unnoticed by the fashion industry. Their unique designs and innovative approaches to sustainability have captured the attention of fashion enthusiasts and industry insiders alike. These brands have become trendsetters, influencing the way lingerie is perceived and challenging traditional notions of what is considered fashionable. </p><h2 id="The-Future-of-Chinese-Lingerie-in-Manhattan"><a href="#The-Future-of-Chinese-Lingerie-in-Manhattan" class="headerlink" title="The Future of Chinese Lingerie in Manhattan"></a>The Future of Chinese Lingerie in Manhattan</h2><p><img src="https://cdn.jsdelivr.net/gh/PirlosM/image@main/20231101132119.png"></p><p>The future looks promising for Chinese lingerie brands in Manhattan. As the demand for ethically produced and inclusive lingerie continues to grow, these brands are well-positioned to meet the needs of modern consumers. 
With their unique blend of Chinese aesthetics, sustainable practices, and celebration of individuality, Chinese lingerie brands are poised to make a lasting impact on the lingerie industry in Manhattan and beyond.</p><h3 id="The-Growing-Demand"><a href="#The-Growing-Demand" class="headerlink" title="The Growing Demand"></a>The Growing Demand</h3><p>As consumers become more conscious of the environmental and social impact of their purchases, the demand for sustainable and ethical lingerie is on the rise. Chinese lingerie brands, with their commitment to organic materials and transparent production processes, are well-suited to meet this demand. By offering high-quality lingerie that aligns with consumers’ values, these brands are carving out a niche in the market and attracting a loyal customer base. </p><h3 id="Expanding-Market-Reach"><a href="#Expanding-Market-Reach" class="headerlink" title="Expanding Market Reach"></a>Expanding Market Reach</h3><p>Chinese lingerie brands in Manhattan are not only catering to the local market but also expanding their reach globally. With the rise of e-commerce and social media, these brands have the opportunity to showcase their designs to a worldwide audience. By leveraging digital platforms and strategic partnerships, Chinese lingerie brands can reach customers far beyond the streets of Manhattan, establishing themselves as key players in the global lingerie industry.</p><h2 id="Conclusion"><a href="#Conclusion" class="headerlink" title="Conclusion"></a>Conclusion</h2><p>The rise of Chinese lingerie brands in Manhattan signifies a shift in the fashion landscape, where cultural diversity and inclusivity are celebrated. By combining Chinese aesthetics, sustainable practices, and a celebration of individuality, these brands are redefining the lingerie industry and challenging traditional beauty standards. 
As they continue to make their mark in Manhattan and beyond, Chinese lingerie brands are empowering women, shaping fashion trends, and inspiring a new generation of fashion enthusiasts. With their unique blend of tradition and innovation, Chinese lingerie brands are here to stay.  </p><p>Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the official policy or position of any agency or company.</p>]]></content>
    
    
    <summary type="html">Chinese lingerie brands are coming into their own, interpreting femininity, individuality and sexuality in a new light. Let&#39;s explore the rise of Chinese lingerie brands in Manhattan and delve into the stories behind these innovative and empowering brands.</summary>
    
    
    
    <category term="Investors" scheme="https://www.nablepart.com/categories/Investors/"/>
    
    
    <category term="Lingerie Brands" scheme="https://www.nablepart.com/tags/Lingerie-Brands/"/>
    
    <category term="cultural fusion" scheme="https://www.nablepart.com/tags/cultural-fusion/"/>
    
    <category term="fashions" scheme="https://www.nablepart.com/tags/fashions/"/>
    
    <category term="China Lingerie" scheme="https://www.nablepart.com/tags/China-Lingerie/"/>
    
  </entry>
  
  <entry>
    <title>In Pandora&#39;s Frontier, Ubisoft wants to be the &quot;Cameron&quot; of the gaming world</title>
    <link href="https://www.nablepart.com/3e2c16be4b17/"/>
    <id>https://www.nablepart.com/3e2c16be4b17/</id>
    <published>2023-10-31T13:00:00.000Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
    <content type="html"><![CDATA[<p><img src="https://s2.loli.net/2023/10/31/XZrWtu5JQ2d9RFG.png" alt="image.png"></p><h2 id="In-Pandora’s-Frontier-Ubisoft-wants-to-be-the-“Cameron”-of-the-gaming-world"><a href="#In-Pandora’s-Frontier-Ubisoft-wants-to-be-the-“Cameron”-of-the-gaming-world" class="headerlink" title="In Pandora’s Frontier, Ubisoft wants to be the “Cameron” of the gaming world"></a>In Pandora’s Frontier, Ubisoft wants to be the “Cameron” of the gaming world</h2><p>I burrow out of a cave, brush past banana leaves, and climb up along the waterfall and vines; a springy giant purple flower bounces me into the air. The arrow I loose pierces a peculiar pumpkin-shaped plant, which spits out the long stem in its belly, forming a precarious pathway up the wall. A lilting melody of flutes and African drums echoes in my ears, and behind me stretches a stunning, unobstructed view of lush greenery.</p><p>But I can’t take it all in: I am 10,000 feet up in the air, among floating mountains of all sizes held aloft by superconductivity and connected only by roots and vines, and with every step the tension of my fear of heights deepens. It is then that I remember the chief’s parting advice before we set out:</p><p>“My dear, we hunters always say, ‘Don’t look down,’ and you’d do well to remember the lessons of your predecessors.”</p><p>In the 2009 Avatar movie, Jake, a human spy in an avatar body, climbs up to the ikran habitat on Pandora in order to earn his own “pterodactyl”. It’s the climactic moment of the movie: boulders floating in the clouds, ikran weaving through waterfalls, and the protagonist, palms sweating, climbing up a vine at a height of 10,000 feet with no place to land. 
And 14 years later, I experienced a similar moment in first person, in the medium of games.</p><p>And this time, it struck me as deeply as the movie did back then.</p><p><img src="https://s2.loli.net/2023/10/31/RSoIyCgnc1AebKx.png" alt="image.png"></p><h2 id="The-first-PV-for-Avatar-Pandora’s-Frontier"><a href="#The-first-PV-for-Avatar-Pandora’s-Frontier" class="headerlink" title="The first PV for Avatar: Pandora’s Frontier"></a>The first PV for Avatar: Pandora’s Frontier</h2><p>Two years ago, Ubisoft released the first PV for Avatar: Pandora’s Frontier at E3. It was the grand finale of Ubisoft’s conference that year, announced by the company’s founder, Yves Guillemot, himself.</p><p>The most impressive thing about the PV is the “graphical upgrade” Yves Guillemot emphasized: every frame is comparable to CG, with dusk light falling on every inch of Pandora’s vegetation in an extremely natural way. In Pandora’s harmonious rainforest, every flower, tree, shrub, and blade of grass sways in the wind as if it were conscious. The art assets throughout the PV are incredibly high-definition and dense.</p><p>On Bilibili, the teaser trailer has received millions of plays, with more skepticism than praise and amazement in the bullet comments. Given Ubisoft’s long history of graphical downgrades, a large number of players reasonably suspected the trailer of being “false advertising”. Pandora’s dense vegetation, in particular, looks like a movie effect. 
After all, common sense tells us that it is virtually impossible to achieve movie-quality real-time rendering in a game on ordinary consumer-grade graphics cards.</p><p><img src="https://s2.loli.net/2023/10/31/M1SaEJXIe3ULCQq.png" alt="image.png"></p><ul><li>The first PV of what has been described as “false advertising”</li></ul><p>In most cases, games rely on “paper grass” and “paper leaves”, supplemented by high-contrast lighting that reduces physical interaction between the environment and the player, plus brightly colored filters, to create a seemingly “natural” atmosphere. This is a common pattern in modern game design. Whether it’s Ghost of Tsushima, known for its atmosphere, the technically top-notch Forza Horizon 5, or the near-photorealistic medieval wilderness of Kingdom Come: Deliverance, none of them has gone beyond this framework. I remember telling the coworkers who were up all night with me, with the confidence of common sense: “It’s definitely CG!”</p><p>If I hadn’t tried Pandora’s Frontier myself, I’d probably still think that today.</p><p><img src="https://s2.loli.net/2023/10/31/QH2jIWGtbFvdupg.png" alt="image.png"></p><ul><li>The PV demo’s vegetation effects are “fake at first glance”</li></ul><p>But the Ubisoft CEO wasn’t lying. The demo of Pandora’s Frontier has been scaled back a bit in special effects and mechanical modeling compared to the questionable “in-engine” showcase from two years ago. Other than that, the game is almost identical to the PV: the density of the environments remains intact, with individually modeled natural vegetation in a variety of colors as far as the eye can see. 
The environments are richly layered, from moss and grass at the bottom, through shrubs and vines, up to tall trees and even giant canopies, all blending together at the level of many “engine showcase demos”.</p><p>Importantly, the textured feel of this whole “nature” also owes much to the interactivity of the ecology.</p><p>Much like in the Avatar movies, the Na’vi, who live in harmony with the planet’s mother goddess, Eywa, have always insisted on “taking only what nature gives”. Fruits in the game can be harvested, and depending on the season and the skill of the picker, the harvest will vary greatly in quality. Stiff stalks of grass can be used to make arrows, and roasted bird eggs can increase your movement speed. Breaking a pumpkin-like barnacle provides a climbing rope, and inhaling a special pollen boosts your health for a long duration. While it’s not quite Star Citizen-level “everything is animated,” there are far more interactive scripts than in previous Ubisoft games.</p><p>All of this is still limited to presentation, but the immersion brought about by this “technological explosion”, as I delved into the Pandoran rainforest and pressed up against countless patches of thick, textured, interactive, physically detailed grasses and stalks, made a qualitative difference to the experience: wherever I went, I was in the world of the movie.</p><h2 id="Explore-the-world-fight-in-strongholds"><a href="#Explore-the-world-fight-in-strongholds" class="headerlink" title="Explore the world, fight in strongholds"></a>Explore the world, fight in strongholds</h2><p>In contrast to the graphical breakthroughs, Pandora’s Frontier’s gameplay framework remains true to Ubisoft’s “traditional model”. Explore the world, fight in strongholds, and complete one main and side mission after another. 
Open the map and the mission menu, travel to the marker, and watch the cutscene play out.</p><p>The plot is also quite traditional: the player is a native Na’vi who, for reasons unknown, was taken away by humans at a young age and raised apart from his tribe. Although he has formed a deep connection with the human world, as he grows older he becomes more aware of his own people and of the human invasion of Pandora. Eventually, he decides to rejoin his tribe and lead them in revolt.</p><p>That’s right, the game opted for an original plot set between Avatar 1 and 2, and since it is first-person throughout, combined with the traditional formula of clearing strongholds and “rebelling against tyranny,” it’s very reminiscent of the Far Cry series.</p><p><img src="https://s2.loli.net/2023/10/31/C78nHIXpguMshdN.png" alt="image.png"></p><ul><li>Fighting strongholds; sound familiar?</li></ul><p>But as mentioned before, even if everything is familiar, most of the game’s formulaic content is “lubricated” by detailed animations, rich dialog, and significantly improved motion-capture performances, thanks to the sheer number of scripts and art assets.</p><p>Approach an ordinary stronghold at random and you’ll hear narration that fits the current plot. Pass by a plane crash site and the system reminds you of a side quest that asks you to “connect the dots” between objects scattered in the environment, deduce the truth of the accident, and complete an “environmental narrative”.</p><p>From a practical standpoint, this piling-on of flavor onto Ubisoft’s “canned” filler still didn’t stop me from fast-forwarding through some side conversations, but at least it made me happy to stop on my way to the main quest and see one of these “canned” dialogs through to the end. 
Along the way, I found myself checking out the details of the world.</p><p>Obviously, Ubisoft knows that contemporary gamers are tired of “canned” games, so they hit on the idea of adding more flavor to the can. And after tasting it, I’ll admit it works.</p><h2 id="The-Living-World"><a href="#The-Living-World" class="headerlink" title="The Living World"></a>The Living World</h2><p>At the end of the demo, I asked Pandora’s Frontier project producer Alain Gurniki some questions during the media interview session. In his answers about the gameplay, the producer told me that the central creative concept of the entire project is “The Living World”.</p><p>Alain Gurniki told me that Pandora is a breathing, conscious planet, and that the Na’vi, as the “chief of all spirits”, interact with it all the time. So in terms of gameplay, the team needed to present a realistic, detailed, and at the same time complete ecological world, something unprecedented in games.</p><p>In my two-hour demo, I saw carnivorous wolves hunting in the wild, plants shrinking back as the protagonist drew closer, and my own ikran perched in the trees of its home, playing with the birds around it. I don’t know if this is what Alain Gurniki would call an “eco-world,” but even if it’s not “unprecedented,” it’s at least refreshing and deepened my impression of the world over and over again.</p><p>Since Pandora’s Frontier was reviewed throughout development by Disney and Cameron’s Lightstorm team, the game’s vegetation and ecological effects all follow the fundamentals of the planet Pandora, and every step of the technical implementation had to survive Cameron-style “fault-finding”, polishing, and upgrading.</p><p>For example, the upgraded Snowdrop engine can handle geometrically dense art assets efficiently. 
Dynamic weather brings wind physics and water physics, temperature and humidity can be visualized, the fluorescence of Pandora’s unique vegetation rises and falls in brightness with the time of day, and even the pupil dilation of living creatures is simulated in real time.</p><p>As I talked to the producers about these almost “black-tech” effects, I recalled over and over how amazed I felt when I saw Avatar for the first time in 2009.</p><p>At that time, Cameron was already one of the most prestigious directors in the movie industry, and he was obsessed with special-effects technology, hoping to make a never-before-seen sci-fi film, Avatar, using 3D and CGI throughout.</p><p>3D movies existed before Avatar, but because of production costs and the limits of the technology, the results in finished films were mostly rough, and the vertigo, nausea, and headaches they induced were hard to resolve. The Hollywood movie industry had written 3D off as a failed gimmick and shelved it for years, so Avatar was naturally seen as a work swimming against the tide.</p><p>Then Avatar became a box-office legend, raking in $2.9 billion. It became so famous worldwide that in my small third-tier town, the only movie theater ran screenings for a full month. Avatar was also the first 3D movie I ever saw.</p><p><img src="https://s2.loli.net/2023/10/31/JydWu7XIjNRnkEp.png" alt="image.png"></p><ul><li>Avatar was not well received by the industry.</li></ul><p>So in my memory, 3D was not a failed technological gimmick but a huge step forward in the history of cinematic art. 
Cameron gave a young amateur movie fan and a seasoned, experienced Hollywood producer completely opposite impressions, and the future proved the former right.</p><p>With the framework of the gameplay unchanged, what exactly is the point of all this polish? If you have been reading with the same question I had when I first got into the game, then let’s go back to the beginning of the article.</p><p>As I traveled through the jungle, plucking leaves and vines, I climbed to the highest point, and at the end of the quest I tamed the Ikaran that belonged to me, naming him “STROM”. A hunter-guide pushed me off a cliff; I called out its name as I plummeted, then swooped and soared to the sound of rousing vocals, riding the winds and taking in the lush beauty of the world from 10,000 meters up. The emotional experience, like the climax of the ’09 movie, scurries up above the clouds where the Ikaran soars, and perhaps that is where the answer is found.</p><p><strong>Maybe this time, Ubisoft wants to be the Cameron of gaming.</strong></p><p><img src="https://s2.loli.net/2023/10/31/GdRVvyo5CtnDJ82.png" alt="image.png"></p>]]></content>
    
    
    <summary type="html">In Pandora&#39;s Frontier, Ubisoft wants to be the &quot;Cameron&quot; of the gaming world</summary>
    
    
    
    <category term="Game News" scheme="https://www.nablepart.com/categories/Game-News/"/>
    
    <category term="Game Research Associates" scheme="https://www.nablepart.com/categories/Game-Research-Associates/"/>
    
    
    <category term="Pandora&#39;s Frontier" scheme="https://www.nablepart.com/tags/Pandora-s-Frontier/"/>
    
    <category term="Ubisoft" scheme="https://www.nablepart.com/tags/Ubisoft/"/>
    
    <category term="Cameron" scheme="https://www.nablepart.com/tags/Cameron/"/>
    
    <category term="Avatar:Pandora&#39;s Frontier" scheme="https://www.nablepart.com/tags/Avatar-Pandora-s-Frontier/"/>
    
  </entry>
  
  <entry>
    <title>How to Ensure Stable Service Call Availability in Java Projects</title>
    <link href="https://www.nablepart.com/9b601d13ba9c/"/>
    <id>https://www.nablepart.com/9b601d13ba9c/</id>
    <published>2023-10-31T12:28:28.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
    <content type="html"><![CDATA[<blockquote><p>In Java projects, it is common to have service calls between different components. However, if these calls experience timeouts or if the connection pool configuration is not optimal, it can lead to service unavailability. In this article, we will address these issues and provide solutions to ensure stable and available service calls.</p></blockquote><p><img src="https://cdn.jsdelivr.net/gh/PirlosM/image@main/20231031145610.png"></p><h2 id="Preventing-Service-Unavailability-Due-to-Call-Timeouts"><a href="#Preventing-Service-Unavailability-Due-to-Call-Timeouts" class="headerlink" title="Preventing Service Unavailability Due to Call Timeouts"></a>Preventing Service Unavailability Due to Call Timeouts</h2><p>When service calls between components timeout, it can disrupt the normal flow of requests and impact the overall stability of the system. Here are some common solutions to address this issue:</p><h3 id="Optimizing-Network-Latency"><a href="#Optimizing-Network-Latency" class="headerlink" title="Optimizing Network Latency"></a>Optimizing Network Latency</h3><p>To improve the performance of service calls, it is essential to evaluate the network environment and optimize the network connections between services. Consider the following measures:</p><ul><li><p>Use high-speed and stable network connections such as Gigabit Ethernet or fiber optics.</p></li><li><p>Minimize the number of network hops to reduce network latency.</p></li><li><p>Utilize Content Delivery Networks (CDNs) to accelerate data transmission for specific network calls.</p></li></ul><h3 id="Setting-Reasonable-Call-Timeout"><a href="#Setting-Reasonable-Call-Timeout" class="headerlink" title="Setting Reasonable Call Timeout"></a>Setting Reasonable Call Timeout</h3><p>To avoid excessive request backlogs or frequent timeout errors, it is crucial to set appropriate call timeout values based on business requirements and network conditions. 
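</p><p>As a concrete sketch of this advice, the JDK’s built-in java.net.http.HttpClient (Java 11 and later) supports both a connect timeout and a per-request timeout, and its asynchronous API returns a CompletableFuture whose orTimeout() puts a hard cap on the total wait. The class name and the timeout values below are illustrative only, not recommendations:</p>

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class TimeoutSketch {
    // Connect timeout: fail fast when the remote host is unreachable,
    // instead of tying up the calling thread. (2s is an illustrative value.)
    static final HttpClient CLIENT = HttpClient.newBuilder()
            .connectTimeout(Duration.ofSeconds(2))
            .build();

    // Asynchronous call with a per-request timeout plus a hard upper bound,
    // so the caller is never blocked and never waits forever.
    static CompletableFuture<String> fetchAsync(String url) {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .timeout(Duration.ofSeconds(5))      // illustrative value
                .GET()
                .build();
        return CLIENT.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                .thenApply(HttpResponse::body)       // runs off the main thread
                .orTimeout(6, TimeUnit.SECONDS);     // cap on the total wait
    }
}
```

<p>The asynchronous variant also serves the parallel-call advice in this article: several fetchAsync() calls can be in flight at once and joined with CompletableFuture.allOf().</p><p>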
Configure the timeout duration either through configuration files or programmatically, and log timeout information for future optimization.</p><h3 id="Leveraging-Asynchronous-and-Parallel-Calls"><a href="#Leveraging-Asynchronous-and-Parallel-Calls" class="headerlink" title="Leveraging Asynchronous and Parallel Calls"></a>Leveraging Asynchronous and Parallel Calls</h3><p>For non-real-time dependent calls, consider using asynchronous or parallel call techniques to enhance system throughput and responsiveness. By executing time-consuming calls in the background using techniques like multi-threading or distributed task scheduling, you can prevent blocking the main thread.</p><h2 id="Resolving-Service-Unavailability-Caused-by-Connection-Pool-Configuration"><a href="#Resolving-Service-Unavailability-Caused-by-Connection-Pool-Configuration" class="headerlink" title="Resolving Service Unavailability Caused by Connection Pool Configuration"></a>Resolving Service Unavailability Caused by Connection Pool Configuration</h2><p>Connection pooling is a critical component for managing connection resources between services. Improper configuration of the connection pool can deplete resources and lead to service unavailability. Here are some solutions to address this issue:</p><h3 id="Optimal-Connection-Pool-Capacity"><a href="#Optimal-Connection-Pool-Capacity" class="headerlink" title="Optimal Connection Pool Capacity"></a>Optimal Connection Pool Capacity</h3><p>Set the maximum connection capacity of the connection pool based on actual requirements and service load. 
An inadequate pool capacity can result in resource shortages, while an excessive capacity can consume unnecessary system resources.</p><h3 id="Configuring-Connection-Timeout"><a href="#Configuring-Connection-Timeout" class="headerlink" title="Configuring Connection Timeout"></a>Configuring Connection Timeout</h3><p>To prevent long-term occupation of connection resources, configure a connection timeout for the connection pool. When the set time elapses, the connection pool automatically recovers idle connections, ensuring that subsequent requests can obtain available connections.</p><h3 id="Monitoring-Connection-Pool-Status"><a href="#Monitoring-Connection-Pool-Status" class="headerlink" title="Monitoring Connection Pool Status"></a>Monitoring Connection Pool Status</h3><p>Regularly monitor the status of the connection pool, including connection count, idle connections, and active connections. Monitoring allows for timely detection of resource constraints and facilitates necessary scaling or optimization.</p><h3 id="Connection-Pool-Cleaning-and-Recycling-Mechanism"><a href="#Connection-Pool-Cleaning-and-Recycling-Mechanism" class="headerlink" title="Connection Pool Cleaning and Recycling Mechanism"></a>Connection Pool Cleaning and Recycling Mechanism</h3><p>Implement a periodic cleaning and recycling mechanism to release long-unused connections in the connection pool. This helps reduce unnecessary resource occupation and improves the availability of the connection pool.</p><p>By implementing the aforementioned solutions, you can enhance system stability and availability by addressing call timeouts and connection pool configuration issues. 
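</p><p>The pooling mechanisms above (a hard capacity cap, a bounded wait for a free connection, and an idle count to monitor) can be sketched with nothing but the JDK. This toy class is for illustration only; a production system would use a mature pool such as HikariCP or Apache Commons Pool:</p>

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

/** Toy fixed-capacity resource pool illustrating the ideas above. */
public class SimplePool<T> {
    private final BlockingQueue<T> idle;

    public SimplePool(int capacity, Supplier<T> factory) {
        idle = new ArrayBlockingQueue<>(capacity);
        // Pre-fill up to the capacity cap; an oversized pool would waste resources.
        for (int i = 0; i < capacity; i++) {
            idle.add(factory.get());
        }
    }

    /** Waits at most timeoutMs for a free resource; null signals exhaustion. */
    public T borrow(long timeoutMs) {
        try {
            return idle.poll(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }

    /** Returns a resource so that waiting callers can obtain it. */
    public void release(T resource) {
        idle.offer(resource);
    }

    /** Monitoring hook: the number of idle resources currently available. */
    public int idleCount() {
        return idle.size();
    }
}
```

<p>A real pool would also validate connections on borrow and evict ones that stay idle past a timeout, which is exactly what the cleaning and recycling mechanism described above automates.</p><p>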
Optimizing network latency, setting reasonable call timeouts, configuring connection pool capacity, and monitoring connection pool status are effective measures to mitigate service unavailability risks and provide a seamless user experience.</p><p>It is also important to continuously monitor and adjust these configurations to maintain service availability, especially during fluctuating system loads or changes in network conditions.</p><h2 id="Conclusion"><a href="#Conclusion" class="headerlink" title="Conclusion"></a>Conclusion</h2><p>To ensure stable and available service calls in Java projects, it is crucial to address call timeouts and connection pool configuration issues. By optimizing network latency, setting appropriate call timeouts, configuring connection pool capacity, and monitoring connection pool status, you can minimize service unavailability risks and provide a reliable user experience.</p><p>Remember to regularly review and adjust these configurations to adapt to changing system demands and network conditions. By implementing these solutions, you can enhance the stability and availability of your Java projects, making them more resilient and efficient.</p>]]></content>
    
    
    <summary type="html">In Java projects, service calls between different components are very common, and there are some problems that may cause the service to be unavailable. This article discusses some common problems and their solutions.</summary>
    
    
    
    <category term="Technical section" scheme="https://www.nablepart.com/categories/Technical-section/"/>
    
    
    <category term="Java" scheme="https://www.nablepart.com/tags/Java/"/>
    
    <category term="service call" scheme="https://www.nablepart.com/tags/service-call/"/>
    
  </entry>
  
  <entry>
    <title>Let&#39;s Talk about Java Threads and CPU Scheduling!</title>
    <link href="https://www.nablepart.com/c4802718736d/"/>
    <id>https://www.nablepart.com/c4802718736d/</id>
    <published>2023-10-31T07:28:28.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
    <content type="html"><![CDATA[<h2 id="Introduction"><a href="#Introduction" class="headerlink" title="Introduction"></a>Introduction</h2><p>In modern operating systems, when a program is run, a process is created. For example, when we start a Java program, the system creates a Java process. Within a process, multiple threads can be created, each with its own program counter, stack, and local variables. Introducing the concept of threads allows for the separation of resource allocation and execution scheduling within a process. Threads within the same process also share its resources, such as heap memory and open file handles. Threads are lightweight units of execution in a computer system and serve as the smallest unit for system scheduling. In Java, a program starts executing from the main() method, and it may appear that no other threads are involved. However, in reality, a Java program is inherently a multithreaded program because the execution of the main() method is carried out by a thread called the “main” thread. 
Let’s take a closer look at the threads involved in a typical Java program.</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">import</span> java.lang.management.ManagementFactory;</span><br><span class="line"><span class="keyword">import</span> java.lang.management.ThreadInfo;</span><br><span class="line"><span class="keyword">import</span> java.lang.management.ThreadMXBean;</span><br><span class="line"></span><br><span class="line"><span class="keyword">public</span> <span class="keyword">class</span> <span class="title class_">ThreadInfoDemo</span> &#123;</span><br><span class="line">    <span class="keyword">public</span> <span class="keyword">static</span> <span class="keyword">void</span> <span class="title function_">main</span><span class="params">(String[] args)</span> &#123;</span><br><span class="line">        <span class="comment">// Obtain the Java thread management bean</span></span><br><span class="line">        <span class="type">ThreadMXBean</span> <span class="variable">threadMXBean</span> <span class="operator">=</span> ManagementFactory.getThreadMXBean();</span><br><span class="line">        <span class="comment">// Dump thread and stack information only (no locked monitors or synchronizers)</span></span><br><span class="line">        ThreadInfo[] threadInfos = threadMXBean.dumpAllThreads(<span class="literal">false</span>, <span class="literal">false</span>);</span><br><span class="line">        <span class="comment">// Iterate over the thread info and print each thread ID and name</span></span><br><span class="line">        <span class="keyword">for</span> (ThreadInfo threadInfo : threadInfos) &#123;</span><br><span class="line">            System.out.println(<span class="string">&quot;[&quot;</span> + threadInfo.getThreadId() + <span class="string">&quot;]&quot;</span> + threadInfo.getThreadName());</span><br><span class="line">        &#125;</span><br><span class="line">    &#125;</span><br><span 
class="line">&#125;</span><br></pre></td></tr></table></figure><p><img src="https://cdn.jsdelivr.net/gh/PirlosM/image@main/20231031201746.png"></p><p>The above code snippet demonstrates that the execution of a Java program involves not only the main() method but also multiple other threads running simultaneously.</p><h2 id="Thread-Implementation"><a href="#Thread-Implementation" class="headerlink" title="Thread Implementation"></a>Thread Implementation</h2><p>Mainstream operating systems provide various ways to implement threads. In Java, each instance of the java.lang.Thread class, which has been started but not yet terminated, represents a thread. The Thread class in the JDK has some notable differences compared to other Java APIs as its key methods are declared as Native. In Java API, the use of native methods usually implies that the method is not implemented using platform-independent means, indicating that it operates at a low-level beyond the scope of the Java language.</p><p>There are primarily three ways to implement threads: using kernel threads (1:1 implementation), using user threads (N:1 implementation), and using a hybrid implementation of user threads and lightweight processes (N:M implementation).</p><h3 id="Kernel-Threads-1-1-Implementation"><a href="#Kernel-Threads-1-1-Implementation" class="headerlink" title="Kernel Threads (1:1 Implementation)"></a>Kernel Threads (1:1 Implementation)</h3><p>Kernel threads (KLTs) are directly supported by the operating system kernel. The kernel manipulates the scheduler to schedule threads and maps their tasks to different processors. Each KLT corresponds to a lightweight process (LWP) as depicted in the image below. Each kernel thread can be seen as a “clone” of the kernel, enabling the operating system to handle multiple tasks simultaneously and support multithreading. Generally, programs do not directly use kernel threads but rather a higher-level interface called lightweight processes (LWPs). 
LWPs are what we commonly refer to as threads since each LWP is supported by a kernel thread. LWPs and kernel threads have a one-to-one relationship, known as a 1:1 thread model. With the support of kernel threads, each LWP becomes an independent scheduling unit; even if one LWP is blocked in a system call, it does not affect the overall functioning of the process. However, LWP also has some limitations. Since it is implemented based on kernel threads, various thread operations such as creation, destruction, and synchronization require system calls. System calls incur a relatively high cost due to the need for switching between user mode and kernel mode. Additionally, each LWP requires the support of a kernel thread, which consumes certain kernel resources, such as the stack space of the kernel thread. Therefore, the number of LWPs supported by a system is limited.</p><p><img src="https://cdn.jsdelivr.net/gh/PirlosM/image@main/20231031191743.png"></p><center>Schematic of 1:1 between lightweight processes and kernel threads</center><h3 id="User-Threads-N-1-Implementation"><a href="#User-Threads-N-1-Implementation" class="headerlink" title="User Threads (N:1 Implementation)"></a>User Threads (N:1 Implementation)</h3><p>User threads are threads that are completely implemented in user space and are invisible to the operating system kernel. The kernel schedules only the containing process and knows nothing about the threads inside it. The creation, synchronization, destruction, and scheduling of user threads are all performed in user space without relying on the kernel. If implemented properly, these threads can avoid switching to kernel mode, resulting in fast and low-cost operations. They can also support a larger number of threads, which is why they are often used in high-performance databases and similar scenarios. The relationship between processes and user threads follows a many-to-one thread model. The advantage of using user threads is that they do not require support from the operating system kernel. 
However, the disadvantage is that they also lack support from the kernel, meaning that all thread operations need to be handled by the user program itself. This includes thread creation, switching, and scheduling. If one thread is blocked, it may cause the entire process to be blocked. Java previously used user threads, but eventually abandoned them. However, newer programming languages such as Golang and Erlang, which focus on high concurrency, widely support user threads.</p><p><img src="https://cdn.jsdelivr.net/gh/PirlosM/image@main/20231031192424.png"></p><center>Schematic of the N:1 relationship between processes and user threads</center><h3 id="Hybrid-Implementation-N-M-Implementation"><a href="#Hybrid-Implementation-N-M-Implementation" class="headerlink" title="Hybrid Implementation (N:M Implementation)"></a>Hybrid Implementation (N:M Implementation)</h3><p>The hybrid implementation combines user threads and lightweight processes. User threads are still implemented entirely in user space, so their creation, switching, and destruction are still inexpensive and can support a large number of user threads. The operating system provides support for lightweight processes, which act as a bridge between user threads and kernel threads. This allows for the utilization of the kernel’s thread scheduling and processor mapping capabilities. System calls made by user threads are handled through lightweight processes, greatly reducing the risk of the entire process being completely blocked. In this hybrid model, the ratio between user threads and lightweight processes can vary, forming an N:M relationship. 
Many UNIX-based operating systems provide N:M thread model implementations, making it easier to apply the N:M thread model to applications running on these systems.</p><p><img src="https://cdn.jsdelivr.net/gh/PirlosM/image@main/20231031193015.png"></p><center>Schematic of the N:M relationship between user threads and lightweight processes</center><h2 id="Java-Thread-Implementation"><a href="#Java-Thread-Implementation" class="headerlink" title="Java Thread Implementation"></a>Java Thread Implementation</h2><p>The implementation of Java threads is heavily influenced by the thread models supported by the underlying operating systems. The JVM specification does not dictate the specific thread model that must be used. The choice of thread model primarily affects the concurrency scale and operational costs of threads. However, for Java developers, these differences are transparent. Java, as a higher-level application, does not require developers to be concerned with the specific details of thread models. Before JDK 1.2, Java threads used a user-level thread implementation called “Green Threads”. However, starting from JDK 1.3, the thread model was replaced with an implementation based on the native thread model of the operating system, using a 1:1 thread model. The most commonly used JVM for Java SE is the HotSpot VM developed by Oracle&#x2F;Sun. In the newer versions of HotSpot VM that are supported on all platforms except Solaris, the 1:1 thread model is used. This means that a Java thread is directly implemented using a native operating system thread, without any additional indirect structures in between. Furthermore, HotSpot VM does not interfere with thread scheduling and leaves it entirely to the underlying operating system.</p><h2 id="Thread-Scheduling"><a href="#Thread-Scheduling" class="headerlink" title="Thread Scheduling"></a>Thread Scheduling</h2><p>Thread scheduling refers to the process of assigning processor usage rights to threads in a system. 
There are two main scheduling approaches: cooperative thread scheduling and preemptive thread scheduling. In a cooperative scheduling system, each thread controls its own execution time. After completing its work, a thread needs to actively notify the system to switch to another thread. Cooperative scheduling has the advantage of simplicity, as there are no thread synchronization issues since each switch is known to the thread itself. However, cooperative scheduling also has significant drawbacks. The execution time of a thread is not controlled, and if a thread encounters a problem and fails to notify the system to switch threads, the entire process may be indefinitely blocked. In a preemptive scheduling system, the system allocates execution time to each thread. Thread switches are not controlled by the thread itself (in Java, Thread.yield() lets a thread voluntarily give up execution time, but a thread has no way of claiming extra execution time for itself). In this scheduling implementation, the execution time of threads is controlled by the system, and a thread blocking the process indefinitely is avoided. Java uses preemptive scheduling as its thread scheduling mechanism. If a process encounters a problem, we can terminate it using the “Task Manager” without causing the entire system to crash.</p><p>When a system has a single processor, the operating system can handle multitasking and switch between multiple threads. The processor assigns CPU time slices to each thread for execution. A CPU time slice is the duration of time allocated by the operating system for a thread to execute its tasks and is usually a few tens of milliseconds. During this short time, threads are switched back and forth so rapidly that we do not perceive the switch, making it appear as if they are running simultaneously. The time slice determines how long a thread can continuously occupy the processor for execution. 
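</p><p>Preemptive time slicing can be observed with a few lines of plain JDK code: neither worker below ever yields or notifies the system, yet the operating system interleaves them and both run to completion (the iteration count is arbitrary):</p>

```java
import java.util.concurrent.atomic.AtomicInteger;

public class PreemptionDemo {
    // Two workers share the processor under preemptive scheduling: neither
    // ever calls yield(), yet the OS time-slices them and both finish.
    public static int run() {
        AtomicInteger counter = new AtomicInteger();
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter.incrementAndGet(); // atomic, so the final sum is exact
            }
        };
        Thread t1 = new Thread(work, "worker-1");
        Thread t2 = new Thread(work, "worker-2");
        t1.start();
        t2.start();
        try {
            t1.join(); // wait for both time-sliced workers to complete
            t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return counter.get();
    }
}
```

<p>Because the counter is atomic, the combined total is exact no matter how the scheduler slices the two threads between (or across) cores.</p><p>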
When a thread’s time slice is exhausted or it is forced to pause due to its own reasons, another thread (either the same thread or a thread from another process) is chosen by the operating system to occupy the processor. This process of one thread pausing and another thread being selected for execution is known as a context switch. A context switch involves saving the entire execution context of a thread to resume execution from where it left off. It includes variables, computed results, program counters, and more. It’s like taking a snapshot of the thread’s running environment so that when it regains CPU time, it can quickly restore the previous execution context by retrieving the saved data. This process is called a “context switch”. In a system with multiple CPUs, the operating system assigns CPUs to different threads in a round-robin manner, resulting in more frequent context switches, especially when switching across different CPUs, which are more expensive than context switches within a single CPU. In the context of multithreaded programming, we are mainly concerned with the performance impact of context switches between threads. Now, let’s explore the reasons behind context switches in multithreading.</p><h2 id="Thread-States"><a href="#Thread-States" class="headerlink" title="Thread States"></a>Thread States</h2><p>System threads primarily have five states: “New”, “Runnable”, “Running”, “Blocked”, and “Dead”. In the Java context, they are mapped to the following six states: “NEW”, “RUNNABLE”, “BLOCKED”, “WAITING”, “TIMED_WAITING”, and “TERMINATED”. The transition of a thread’s state from “RUNNING” to “BLOCKED” or from “BLOCKED” to “RUNNABLE” triggers a context switch between threads. 
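</p><p>The Java-level states can be observed directly through Thread.getState(); here is a minimal sketch (the 50 ms sleep is arbitrary, just long enough for the worker to park):</p>

```java
public class StateDemo {
    // Walk one thread through NEW -> (running/sleeping) -> TERMINATED.
    public static Thread.State[] observe() {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(50); // parks the worker briefly
            } catch (InterruptedException ignored) {
            }
        });
        Thread.State before = t.getState(); // NEW: created but not yet started
        t.start();
        try {
            t.join(); // wait until the worker finishes
        } catch (InterruptedException ignored) {
        }
        Thread.State after = t.getState(); // TERMINATED: run() has returned
        return new Thread.State[] { before, after };
    }
}
```

<p>Sampling t.getState() while the worker is asleep would typically return TIMED_WAITING, the state behind Thread.sleep().</p><p>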
Let’s take a look at the different scenarios that can lead to context switches.</p><ul><li><p>Time Slice Exhaustion: When a thread’s time slice is exhausted, the operating system forcefully switches to another thread to ensure fair CPU time allocation to other threads.</p></li><li><p>Preemption by Higher Priority Thread: If a thread with a higher priority needs to execute, the operating system interrupts the execution of the current thread and switches to the higher priority thread.</p></li><li><p>Blocked Operation: When a thread performs a blocked operation, such as waiting for I&#x2F;O completion or waiting for a lock to be released, the operating system places that thread in a blocked state and switches to another executable thread to fully utilize CPU resources.</p></li><li><p>Thread Synchronization: When multiple threads need to access shared resources, thread synchronization operations such as mutex locks or semaphores are used. In this case, when one thread acquires the synchronization resource, other threads may need to wait, leading to a context switch.</p></li><li><p>Interrupt Handling: When a hardware or software interrupt occurs, the operating system interrupts the execution of the current thread and switches to handling the interrupt event, which can cause a thread switch.</p></li></ul><p>In these scenarios, the operating system determines which thread to switch to based on the scheduling algorithm and priority rules, and performs a context switch by saving and restoring the thread’s execution context.</p><h2 id="Performance-Impact-of-Context-Switches"><a href="#Performance-Impact-of-Context-Switches" class="headerlink" title="Performance Impact of Context Switches"></a>Performance Impact of Context Switches</h2><p>Context switches have a performance impact on a system due to the overhead incurred during the switch. Context switches involve saving the execution context of a thread and restoring it when it resumes execution. 
This process requires memory operations and can be time-consuming, especially when switching across different CPUs, since the processor caches may need to be invalidated and reloaded. The frequency and duration of context switches can affect the overall performance of a system. However, the impact of context switches on system performance depends on various factors, such as the frequency of thread switches, the number of threads, the efficiency of the scheduling algorithm, and the workload characteristics. In general, reducing the number of unnecessary context switches and optimizing the scheduling algorithm can help improve system performance.</p><h2 id="Conclusion"><a href="#Conclusion" class="headerlink" title="Conclusion"></a>Conclusion</h2><p>Understanding Java threads and the concept of thread scheduling is crucial for developing efficient and concurrent applications. Java provides a unified interface for thread operations across different hardware and operating system platforms. The implementation of Java threads is influenced by the underlying thread models supported by the operating system. The choice of thread model affects the scalability and operational costs of threads. Thread scheduling determines how CPU time is allocated to different threads, and context switches play a significant role in achieving fair and efficient resource utilization. By optimizing thread usage and minimizing unnecessary context switches, developers can improve the performance of their Java applications. So, whether you’re a beginner or an experienced Java developer, it’s important to have a solid understanding of Java threads and the impact of thread scheduling on application performance.</p><p>Remember, the key to successful multithreaded programming lies in balancing the workload, minimizing contention, and ensuring efficient resource utilization. 
With the right knowledge and skills, you can leverage the power of Java threads to build robust and high-performance applications.</p>]]></content>
    
    
    <summary type="html">Understanding the concepts of Java threads and thread scheduling is critical to developing efficient concurrent applications.</summary>
    
    
    
    <category term="Technical section" scheme="https://www.nablepart.com/categories/Technical-section/"/>
    
    
    <category term="Cpu scheduling" scheme="https://www.nablepart.com/tags/Cpu-scheduling/"/>
    
    <category term="Thread" scheme="https://www.nablepart.com/tags/Thread/"/>
    
  </entry>
  
  <entry>
    <title>The Difficulties and Challenges of Starting a Business</title>
    <link href="https://www.nablepart.com/7dde4cf41a38/"/>
    <id>https://www.nablepart.com/7dde4cf41a38/</id>
    <published>2023-10-30T15:28:28.000Z</published>
    <updated>2025-08-25T09:00:39.798Z</updated>
    
    <content type="html"><![CDATA[<h2 id="引言"><a href="#引言" class="headerlink" title="引言"></a>Introduction</h2><p><img src="https://cdn.jsdelivr.net/gh/PirlosM/image@main/20231031125047.png"></p><p>Starting a business has always been a daunting task, especially in today’s fiercely competitive market. Whether it is market saturation, rising costs, or working hard without turning a profit, entrepreneurs face more and more problems. According to data from the China Banking and Insurance Regulatory Commission, the average lifespan of small and medium-sized enterprises in China is only about three years, and only one third are still operating normally three years after founding. Figures like these are worrying, yet we have also seen companies survive this stage and go on to do well. How do they manage it?</p><blockquote><p>“I must have been crazy to start a business.”</p></blockquote><blockquote><p>“If anyone ever talks me into starting a business again, I won’t do it. Skip it and you regret it for two years; do it and you regret it for the rest of your life.”</p></blockquote><p>Lately I have been hearing remarks like these more and more often. And indeed, starting a business has never been easy, especially in recent years. In too many categories the market is all but saturated, and squeezing in is harder than threading a needle. Costs keep climbing; sometimes selling one more unit means losing two more yuan. You work day and night, 996 and 007, ruining your health, and still barely make any money.</p><p>A harder environment and more problems mean one thing: the odds of survival keep shrinking. Can I learn something that will help my company last a little longer?</p><p>That is a big question, and a hard one to answer. But for big questions, looking for classic answers, for wisdom that has stood the test of time, may be the most reliable approach. So let me recommend someone: Ichak Adizes.</p><p>Why him? He is regarded as one of the most influential management thinkers in America, the creator of the corporate lifecycle theory, and an expert in organizational change and therapy. Drawing on more than 30 years of diagnosing organizations in business and government, he developed the “Adizes Methodology” and wrote the book <em>Corporate Lifecycles</em>. In it, he argues that a company is like a person: it grows, ages, and can even die. For each stage of the lifecycle he points out the traps that may lie ahead and offers his advice.</p><p><img src="https://cdn.jsdelivr.net/gh/PirlosM/image@main/20231031124131.png"></p><center>(Diagram of the stages of the corporate lifecycle)</center><p>If you have not read the book, I suggest you at least spend a little time on this article. Starting from the founding stage, we will walk through the stages of the corporate lifecycle and the problems and remedies that come with each.</p><h2 id="创新精神"><a href="#创新精神" class="headerlink" title="创新精神"></a>The Entrepreneurial Spirit</h2><p>Entrepreneurship is where everything begins. The moment you feel an inner urge to build something, you have taken the first step. The entrepreneurial spirit is like returning a ball: you must predict where it will land and act immediately. It combines foresight with a willingness to take risks, and through these two moves an entrepreneur spots an opportunity in the market and acts on it quickly.</p><h2 id="创业空想"><a href="#创业空想" class="headerlink" title="创业空想"></a>The Founder’s Daydream</h2><p><img src="https://cdn.jsdelivr.net/gh/PirlosM/image@main/20231031125153.png"></p><p>In the early days you will run into many difficulties and challenges: finding the right office space, hiring employees, improving the product. Nothing about starting a business goes smoothly, and there are countless details and pitfalls to get past. At this point you must persevere and believe in the value of what you are building, rather than retreat at the first loss. What every entrepreneur really does is take the customer’s uncertainty and, through their own effort, the team’s effort, and the product’s effort, drive it down and down until it becomes certainty.</p><p>If instead you first look for where the money is and then go grab it, what you are doing is essentially arbitrage, a chance to make quick money. But once the quick money is made, the team’s heart scatters; nobody is willing to put their head down and grind any more, and a lasting business becomes impossible. You will have wasted your most precious resource: time. That time could have gone into more valuable things, such as serving more market needs, creating more value, and accumulating more assets.</p><p>Getting from a romantic idea to real action is not easy. One misstep and the venture remains a daydream. Now suppose you really did make it through, one step at a time. The company did not stay a fantasy, and you now own an “infant” enterprise. You need someone in charge of sales, someone in charge of production, and someone in charge of hiring. And then you may discover that running on passion alone, throwing yourself at every bone to gnaw on, no longer works. What do you do? Don’t panic: you probably need to introduce “management by objectives”, so the team understands <em>what</em> to do.</p><h2 id="婴儿期的企业"><a href="#婴儿期的企业" class="headerlink" title="婴儿期的企业"></a>The Infant Enterprise</h2><p>Once you have founded a startup, you face sales, production, hiring, and a string of other problems. Personal passion and effort are no longer enough; you need management by objectives. It lets the team see its tasks and goals clearly, divide the work, and reach a shared target. And a company’s goal is not simply profit; it revolves around creating value and meeting customer needs.</p><p>Dr. Adizes gives an example. Five people are walking down a narrow mountain path from the summit. It is the only path; tall brush and scattered rocks on both sides leave nowhere else to step, so there is no way around. Then they find a boulder lying across the path. Clearly, it has to be moved. The strongest of them flexes his wrists and signals that he will handle it. He tries, and finds he cannot lift it. So what now? All five push together and move the boulder. That is where management by objectives begins: making the team understand what to do.</p><p>But what is a company’s goal? Is it profit? Dr. Adizes says no. Profit is like love: it is only a result, not a goal. If you go around snarling “I want love, I want true love” all day, you will most likely never get it, because love is only an outcome. The goal is to live each day well with the person you have truly fallen for. Likewise, profit is only a company’s result, not its goal. Remember why you started the business in the first place? An opportunity, a chance to create value, a need that was making customers miserable. That is the boulder you actually have to move. Around that purpose, around the value you ought to create, you break the goal down, divide the work, and deliver: keep meeting customer needs, keep honoring promises to customers, keep producing the results you should produce.</p><p>Do management by objectives well and a young company can earn short-term results very quickly. But then a new situation arises. The short-term results come in fine, yet you have been buried in objectives the whole time and are exhausted. The original passion, enthusiasm, and inventiveness have drowned in repetitive work: one business dinner after another, one customer visit after another, one small product tweak after another. You are worn out. Why keep going? Maybe just quit? At this moment the enterprise, barely grown into an infant, faces the risk of dying in infancy.</p><h2 id="婴儿期的挑战"><a href="#婴儿期的挑战" class="headerlink" title="婴儿期的挑战"></a>Challenges of Infancy</h2><p>Infant enterprises face enormous challenges. At this stage the most important thing is to deliver results first and honor your promises to customers. The founder must confront numbers, risk, and payroll. Early results may be decent, but you may feel drained, and passion and inventiveness gradually fade. Only by pushing through and hitting the goals can the company survive this stage.</p><h2 id="青少年期的企业"><a href="#青少年期的企业" class="headerlink" title="青少年期的企业"></a>The Adolescent Enterprise</h2><p><img src="https://cdn.jsdelivr.net/gh/PirlosM/image@main/20231031130032.png"></p><p>Once the company outgrows infancy and enters adolescence, new challenges appear. It must keep innovating and developing to cope with market shifts and competitive pressure. Meanwhile, managers must gradually shift their role from hands-on operation to strategic decision-making and build a more complete organizational structure and management system.</p><h2 id="成熟期的企业"><a href="#成熟期的企业" class="headerlink" title="成熟期的企业"></a>The Mature Enterprise</h2><p>In maturity the focus turns to efficiency and steady growth. The company must attend to internal management and raise product quality and service levels to keep customers loyal, while also seeking new growth drivers and opening new markets to meet competitive challenges.</p><h2 id="衰老期的企业"><a href="#衰老期的企业" class="headerlink" title="衰老期的企业"></a>The Aging Enterprise</h2><p>Aging is the final stage of the corporate lifecycle, when the company faces the risk of decline and decay. At this stage it must adjust its strategy in time, adapt to market change, and look for new development opportunities, while placing extra weight on innovation and organizational change to extend its life.</p><h2 id="结论"><a href="#结论" class="headerlink" title="结论"></a>Conclusion</h2><p>Starting a business is not easy, but by studying and borrowing from corporate lifecycle theory, we can better understand the difficulties and challenges of the journey and look for ways to solve them. Entrepreneurial spirit, management by objectives, and continuous innovation are all key ingredients of a successful venture. Only persistent effort carries a company through each stage toward lasting success and steady growth.</p>]]></content>
    
    
    <summary type="html">This article walks through the corporate life cycle from birth to aging, the problems a company may face at each stage of growth, and how to respond to them.</summary>
    
    
    
    <category term="Investors" scheme="https://www.nablepart.com/categories/Investors/"/>
    
    
    <category term="Entrepreneurship" scheme="https://www.nablepart.com/tags/%E5%88%9B%E4%B8%9A/"/>
    
    <category term="Profit" scheme="https://www.nablepart.com/tags/%E7%9B%88%E5%88%A9/"/>
    
  </entry>
  
  <entry>
    <title>You Are Your Greatest Investment-Unlocking Your Potential for Success</title>
    <link href="https://www.nablepart.com/0302fbbaa9ad/"/>
    <id>https://www.nablepart.com/0302fbbaa9ad/</id>
    <published>2023-10-30T13:28:28.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
    <content type="html"><![CDATA[<blockquote><p>When it comes to investing, Warren Buffett, the legendary investor, said it best: “By far, the best investment you can make is in yourself.” While this statement may initially sound cliché, it holds profound wisdom. Investing in yourself goes beyond the traditional understanding of self-care and continuous learning. It encompasses a broader perspective that can revolutionize your life. In this article, we will explore the true meaning behind this statement and delve into why you are your most valuable asset.</p></blockquote><p><img src="https://cdn.jsdelivr.net/gh/PirlosM/image@main/20231031105949.png"></p><h2 id="The-Power-of-Value-Investing"><a href="#The-Power-of-Value-Investing" class="headerlink" title="The Power of Value Investing"></a>The Power of Value Investing</h2><p>In the world of investing, different strategies exist, each with its own unique approach and potential for profitability. One of the most renowned strategies is value investing. Unlike momentum investing, which focuses on short-term gains and constant monitoring of stock performance, value investing takes a long-term perspective. It involves identifying undervalued assets and patiently waiting for the market to recognize their true worth. This approach not only applies to financial investments but also to our personal growth and development.</p><h3 id="Unleashing-Your-Potential"><a href="#Unleashing-Your-Potential" class="headerlink" title="Unleashing Your Potential"></a>Unleashing Your Potential</h3><p>Similar to value investing, investing in yourself means recognizing your intrinsic value and nurturing it to reach its full potential. This investment goes beyond external factors such as money and time. It involves prioritizing your physical and mental well-being, acquiring new skills, and cultivating a growth mindset. 
By doing so, you position yourself to seize opportunities, overcome challenges, and achieve success.</p><h3 id="Prioritizing-Physical-and-Mental-Well-being"><a href="#Prioritizing-Physical-and-Mental-Well-being" class="headerlink" title="Prioritizing Physical and Mental Well-being"></a>Prioritizing Physical and Mental Well-being</h3><p>Taking care of your health is the foundation of personal growth. Just as a value investor maintains a healthy portfolio, you must maintain a healthy body and mind. This includes regular exercise, proper nutrition, sufficient sleep, and effective stress management. By prioritizing your well-being, you enhance your energy levels, mental clarity, and overall productivity.</p><h3 id="Acquiring-New-Skills"><a href="#Acquiring-New-Skills" class="headerlink" title="Acquiring New Skills"></a>Acquiring New Skills</h3><p>Investing in yourself also means continuously expanding your knowledge and acquiring new skills. Just as value investors research and analyze potential investments, you should seek out opportunities for personal and professional development. This may involve enrolling in courses, attending seminars or conferences, or even pursuing advanced degrees. By broadening your skill set, you increase your value in the marketplace and open doors to new opportunities.</p><h3 id="Cultivating-a-Growth-Mindset"><a href="#Cultivating-a-Growth-Mindset" class="headerlink" title="Cultivating a Growth Mindset"></a>Cultivating a Growth Mindset</h3><p>A growth mindset is a vital component of investing in yourself. Embracing a growth mindset means believing in your ability to learn and grow throughout your life. Just as value investors remain patient during market downturns, you must persevere through setbacks and challenges. 
By adopting a growth mindset, you view failures as learning opportunities and setbacks as stepping stones toward success.</p><p><img src="https://cdn.jsdelivr.net/gh/PirlosM/image@main/20231031110027.png"></p><h2 id="The-Ripple-Effect"><a href="#The-Ripple-Effect" class="headerlink" title="The Ripple Effect"></a>The Ripple Effect</h2><p>Investing in yourself not only benefits you personally but also has a ripple effect on those around you. Just as value investing can generate positive outcomes for the broader economy, your personal growth can positively impact your relationships, career, and overall well-being.</p><h3 id="Enhancing-Relationships"><a href="#Enhancing-Relationships" class="headerlink" title="Enhancing Relationships"></a>Enhancing Relationships</h3><p>When you invest in yourself, you become a better partner, friend, and family member. By prioritizing self-improvement, you develop greater emotional intelligence, communication skills, and empathy. These qualities enhance your ability to build and maintain meaningful relationships, leading to a more fulfilling personal life.</p><h3 id="Advancing-Your-Career"><a href="#Advancing-Your-Career" class="headerlink" title="Advancing Your Career"></a>Advancing Your Career</h3><p>Investing in yourself is a catalyst for career growth and advancement. By continuously developing your skills and knowledge, you position yourself as an invaluable asset to your current or future employer. A commitment to self-improvement demonstrates ambition, adaptability, and a strong work ethic - qualities highly sought after in the professional world.</p><p><img src="https://cdn.jsdelivr.net/gh/PirlosM/image@main/20231031110121.png"></p><h3 id="Fostering-Overall-Well-being"><a href="#Fostering-Overall-Well-being" class="headerlink" title="Fostering Overall Well-being"></a>Fostering Overall Well-being</h3><p>The benefits of investing in yourself extend beyond personal and professional growth. 
When you prioritize your well-being and personal development, you cultivate a sense of fulfillment and happiness. This, in turn, positively affects your mental health, overall life satisfaction, and general outlook on life.</p><h2 id="Conclusion"><a href="#Conclusion" class="headerlink" title="Conclusion"></a>Conclusion</h2><p>In conclusion, Warren Buffett’s statement, “By far, the best investment you can make is in yourself,” holds profound significance. Investing in yourself goes beyond surface-level self-care and continuous learning; it is a commitment to unlocking your full potential. By adopting a value investing mindset towards your personal growth, you position yourself for long-term success. Prioritize your physical and mental well-being, acquire new skills, and cultivate a growth mindset. This investment in yourself will not only benefit you personally but also create a positive ripple effect on your relationships, career, and overall well-being. Remember, you are your most valuable asset - invest wisely.</p>]]></content>
    
    
    <summary type="html">Self-investment is a process of personal growth and enhancement. It is more than the traditional understanding of self-care and continuous learning; it is a holistic, multidimensional investment. It includes enhancing one&#39;s knowledge, skills, and experiences, as well as personal qualities and values. Through self-investment, we can better meet life&#39;s challenges and realize our goals and dreams.</summary>
    
    
    
    <category term="Investors" scheme="https://www.nablepart.com/categories/Investors/"/>
    
    
    <category term="Self-investment" scheme="https://www.nablepart.com/tags/Self-investment/"/>
    
    <category term="value added investment" scheme="https://www.nablepart.com/tags/value-added-investment/"/>
    
    <category term="Success" scheme="https://www.nablepart.com/tags/Success/"/>
    
    <category term="Invest wisely" scheme="https://www.nablepart.com/tags/Invest-wisely/"/>
    
  </entry>
  
  <entry>
    <title>The Reality of Full-Time Stock Trading-A Cautionary Tale</title>
    <link href="https://www.nablepart.com/6bea2dbed65e/"/>
    <id>https://www.nablepart.com/6bea2dbed65e/</id>
    <published>2023-10-30T12:28:28.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
    <content type="html"><![CDATA[<h2 id="Introduction"><a href="#Introduction" class="headerlink" title="Introduction"></a>Introduction</h2><p><img src="https://cdn.jsdelivr.net/gh/PirlosM/image@main/20231030153347.png"></p><p>In today’s fast-paced and volatile stock market, many individuals are enticed by the allure of making quick and substantial profits. However, the reality of full-time stock trading is often far from the dream of becoming the next Warren Buffett. This cautionary tale follows the journey of a 27-year-old individual who, after five months of full-time stock trading, experienced a nearly 50% loss in their stock portfolio. The story sheds light on the challenges, emotional toll, and harsh realities of pursuing a career as a full-time stock trader.</p><h2 id="The-Illusion-of-Freedom-Leaving-a-Job-for-Stock-Trading"><a href="#The-Illusion-of-Freedom-Leaving-a-Job-for-Stock-Trading" class="headerlink" title="The Illusion of Freedom: Leaving a Job for Stock Trading"></a>The Illusion of Freedom: Leaving a Job for Stock Trading</h2><p>In May of this year, the protagonist of our tale made the decision to leave their job and try their hand at full-time stock trading. Dissatisfied with their previous work environment and fueled by the belief that they could become the next stock market success story, they embarked on this new venture with high hopes and a capital of 200,000 CNY.</p><p>The individual’s initial experience in the stock market was discouraging, as their first stock purchase resulted in a loss. However, undeterred by this setback, they continued to follow stock recommendations from popular online figures, hoping to ride the wave of profitable opportunities. While some of these investments yielded positive returns, a friend’s cautionary tale about a stock market training program scam served as a wake-up call. 
Realizing the risks involved, the individual decided to rely on their own analysis and recommendations from fellow traders.</p><h2 id="The-Roller-Coaster-Ride-A-Series-of-Losses"><a href="#The-Roller-Coaster-Ride-A-Series-of-Losses" class="headerlink" title="The Roller Coaster Ride: A Series of Losses"></a>The Roller Coaster Ride: A Series of Losses</h2><p>The stock market, like any other market, experiences trends and fads. The protagonist found themselves caught up in the hype surrounding artificial intelligence (AI) stocks and, against their initial reservations, decided to invest. Unfortunately, their timing was off, and they entered the market just as the AI trend was fading. The individual’s investment in an AI-related stock quickly plummeted, resulting in a significant loss.</p><p><img src="https://cdn.jsdelivr.net/gh/PirlosM/image@main/20231030153554.png"></p><p>Undeterred by this setback, the individual turned to short-term trading strategies, often participating in post-market analysis sessions with fellow traders. Together, they would identify stocks that experienced daily price limits and analyze their patterns and market sectors. Despite their efforts, the individual’s trading endeavors continued to result in losses, leading to frustration, sleepless nights, and a sense of helplessness.</p><h2 id="The-Psychological-Toll-Fear-Greed-and-Desperation"><a href="#The-Psychological-Toll-Fear-Greed-and-Desperation" class="headerlink" title="The Psychological Toll: Fear, Greed, and Desperation"></a>The Psychological Toll: Fear, Greed, and Desperation</h2><p>As the losses piled up, the individual’s mental state deteriorated. The initial anxiety soon turned into sleepless nights, irritability, and a sense of being overwhelmed. The fear of missing out on potential profits led to impulsive and irrational decisions, further exacerbating their losses. 
The individual’s focus shifted from careful analysis to desperation, resulting in a distorted and unhealthy approach to trading.</p><p>The combination of unemployment, financial pressure, and the constant need to make up for losses created a toxic environment. The individual’s once-passionate pursuit of stock trading turned into a soul-sucking and relentless cycle of buying and selling, hoping for a reversal of fortune. The constant losses and the accompanying emotional roller coaster took a toll on their mental well-being.</p><h2 id="The-Awakening-Accepting-Reality-and-Moving-Forward"><a href="#The-Awakening-Accepting-Reality-and-Moving-Forward" class="headerlink" title="The Awakening: Accepting Reality and Moving Forward"></a>The Awakening: Accepting Reality and Moving Forward</h2><p>In mid-August, faced with mounting losses and a personal heartbreak, the individual took a week-long break from trading. Upon their return, they were met with a harsh reality: many of their stocks had plummeted even further during their absence. At that moment, their confidence in value investing shattered. They realized that the stock market was not the right path for them.</p><p>In September, the individual found employment in the technology industry, finally escaping the clutches of the stock market. Reflecting on their experience, they recognized the dangers of overtrading, the psychological toll of constant losses, and the importance of maintaining a stable income. 
They also acknowledged the pitfalls of relying on stock market training programs and the need for independent analysis.</p><h2 id="Lessons-Learned-The-Harsh-Realities-of-Full-Time-Stock-Trading"><a href="#Lessons-Learned-The-Harsh-Realities-of-Full-Time-Stock-Trading" class="headerlink" title="Lessons Learned: The Harsh Realities of Full-Time Stock Trading"></a>Lessons Learned: The Harsh Realities of Full-Time Stock Trading</h2><p>The protagonist’s story serves as a cautionary tale, highlighting the harsh realities of full-time stock trading. While some individuals may find success in the stock market, the vast majority are driven by the hope of overnight wealth, which often leads to irrational decision-making and substantial losses. The allure of quick profits can cloud judgment, causing individuals to neglect proper research, risk management, and emotional well-being.</p><p><img src="https://cdn.jsdelivr.net/gh/PirlosM/image@main/20231030154727.png"></p><p>The individual’s experience also emphasizes the detrimental effects of unemployment and financial pressure on stock trading. Without a stable income, the psychological burden of trading becomes overwhelming, leading to impulsive and desperate actions. Furthermore, the constant need to make up for losses can trap individuals in a cycle of self-destructive behavior.</p><h2 id="Conclusion"><a href="#Conclusion" class="headerlink" title="Conclusion"></a>Conclusion</h2><p>The reality of full-time stock trading is far from the glamorous image often portrayed in media and online forums. The protagonist’s journey serves as a reminder that success in the stock market is not guaranteed, and the pursuit of quick profits can lead to financial ruin and emotional distress. 
Value investing, independent analysis, and a stable income are crucial elements for those considering a career in stock trading.</p><p>While the stock market may hold the potential for substantial gains, it is essential to approach it with caution, realistic expectations, and a well-rounded understanding of the risks involved. The protagonist’s decision to leave the stock market and find stable employment reflects the importance of recognizing one’s limitations and making sound financial choices. Ultimately, the lesson learned is that full-time stock trading is not for everyone, and it should be approached with careful consideration of the potential risks and sacrifices involved.</p>]]></content>
    
    
    <summary type="html">In today&#39;s fast-paced, volatile stock market, many people are drawn to the lure of making a quick and lucrative profit. This story reveals the challenges, emotional toll and harsh reality of full-time stock trading.</summary>
    
    
    
    <category term="Investors" scheme="https://www.nablepart.com/categories/Investors/"/>
    
    
    <category term="Stock" scheme="https://www.nablepart.com/tags/Stock/"/>
    
    <category term="Inducement" scheme="https://www.nablepart.com/tags/Inducement/"/>
    
    <category term="Exposures" scheme="https://www.nablepart.com/tags/Exposures/"/>
    
  </entry>
  
  <entry>
    <title>Temple Dining-Creating a One-of-a-Kind Culinary Experience</title>
    <link href="https://www.nablepart.com/f379bae44a3e/"/>
    <id>https://www.nablepart.com/f379bae44a3e/</id>
    <published>2023-10-30T12:28:28.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
    <content type="html"><![CDATA[<h2 id="导言"><a href="#导言" class="headerlink" title="导言"></a>Introduction</h2><p>Temple dining has set off a wave of enthusiasm in the Chinese market in recent years. From temple coffee and temple milk tea to temple hot pot, these distinctive dining concepts have caused a sensation among young people. As a widely influential mega-IP, the temple, backed by centuries of incense-burning tradition and the ethos of "eating vegetarian and chanting to Buddha," has become an important school of Chinese cuisine. This article takes a close look at what makes temple dining unique and offers some of the best ways to capture the traffic dividend that temples bring.</p><h2 id="独具一格的寺庙餐饮"><a href="#独具一格的寺庙餐饮" class="headerlink" title="独具一格的寺庙餐饮"></a>A Dining Style All Its Own</h2><p>Temple dining stands out in the market because of its distinctive business model and flavors. Temple menus are usually extremely lean, yet they produce refined, delicious vegetarian fare. Dishes such as temple vegetarian noodles and mock crab roe have become important representatives of Chinese cuisine. These dishes maintain such high quality and taste because temple kitchens insist on natural ingredients like tofu skin, wheat gluten, vegetables, and fruit. Even ordinary vegetarian noodles and pastry desserts become scarce goods under the special endorsement of the temple IP.</p><p><img src="https://cdn.jsdelivr.net/gh/PirlosM/image@main/20231030192436.png"></p><p>Another distinctive feature of temple dining is its creative naming. Yongfu Temple's "Ci Bei" ("Mercy Cup") coffee, for example, gives every drink a name steeped in temple culture: the Americano is called "Di Fan" (washing away worries), the latte "Ting Xue" (stopping snow), the mocha "Huan Xi" (joy), and the matcha latte "Ting Shan Yu" (listening to the mountain). These distinctive names have struck a chord and made the brand a favorite among young people.</p><h2 id="寺庙餐饮的红利"><a href="#寺庙餐饮的红利" class="headerlink" title="寺庙餐饮的红利"></a>The Temple Dining Dividend</h2><p>As both religious sites and tourist attractions, temples enjoy a steady stream of visitors. In recent years, as more and more young people have been drawn in, the dining and commerce around temples have changed visibly. Categories that appeal to the young, especially restaurants that make convenient photo-op stops, have sprung up nearby. This influx has not only brought some time-honored brands back into fashion but also driven the rise of some high-end dining.</p><p><img src="https://cdn.jsdelivr.net/gh/PirlosM/image@main/20231030193024.png"></p><p>Take Beijing's Yonghe Temple as an example: more categories popular with young people have appeared around it. Ice cream from the Wuyutai tea house has become a must-have for Yonghe Temple check-ins, while Chinese desserts from Meitancun and Sanyuan Meiyuan are also well liked. High-end dining has gradually taken root near temples too; Jing Zhao Yin, built around vegetarian cuisine, has established a high-end restaurant brand and earned Michelin three-star recognition multiple times.</p><h2 id="寺庙外的餐饮生意"><a href="#寺庙外的餐饮生意" class="headerlink" title="寺庙外的餐饮生意"></a>The Dining Business Outside the Temple</h2><p>Beyond the dining inside temples, plenty of restaurants outside their gates have also managed to capture the temples' traffic dividend. By finding the right angle and offering strongly related, non-homogeneous products and services that cater to young people, some operators have successfully won their attention.</p><p><img src="https://cdn.jsdelivr.net/gh/PirlosM/image@main/20231030192620.png"></p><p>Near Lingyin Temple in Hangzhou there is a commercial block called "Yinshi." Despite heavy, noisy foot traffic, some of its restaurants have failed to attract young people seeking fresh experiences. By contrast, "Longhua Hui," across from Shanghai's Longhua Temple, feels trendier and more youthful. Sitting on a site linking Metro Lines 11 and 12, with roughly 100,000 square meters of commercial space, it has drawn many first-in-market restaurants, offers diverse dining choices, and tops the ranking of popular malls in Xuhui District.</p><h2 id="寺庙餐饮的市场潜力"><a href="#寺庙餐饮的市场潜力" class="headerlink" title="寺庙餐饮的市场潜力"></a>The Market Potential of Temple Dining</h2><p>The temple dining market holds enormous potential. Dining inside the temple may be traditional, but its extreme simplicity, care for flavor, and scarcity remain sharp competitive weapons. To attract more customers around a temple, however, location alone is not enough. Operators need to dig deep for innovative elements, read the emotional value young consumers seek, and offer non-homogeneous products and services that resonate with them. Restaurant chains, too, can share in the temple traffic dividend through projects that blend commerce, culture, and tourism.</p><p><img src="https://cdn.jsdelivr.net/gh/PirlosM/image@main/20231030193100.png"></p><p>Finally, for all its potential, success in the temple dining market demands real innovation and market insight. Only by continually meeting young people's needs and offering products and services that are both distinctive and suited to their tastes can a business claim a place in this fiercely competitive market.</p><h2 id="结语"><a href="#结语" class="headerlink" title="结语"></a>Closing Remarks</h2><p>The rise of temple dining in the Chinese market showcases the charm of temple culture and proves its appeal to the young. Through a distinctive business model and flavors, combined with commerce and tourism, temple dining has created a culinary experience all its own. For restaurant operators, attracting more customers around temples means digging deep for innovation and offering products and services that suit young tastes. By seizing this market opportunity and iterating continuously, the temple dining market will keep growing.</p><p>This is an original article; please contact the author before reprinting.</p>]]></content>
    
    
    <summary type="html">The temple business, you could say, has been hot from the start of this year right through to its end.</summary>
    
    
    
    <category term="Investors" scheme="https://www.nablepart.com/categories/Investors/"/>
    
    
    <category term="Traffic dividend" scheme="https://www.nablepart.com/tags/%E6%B5%81%E9%87%8F%E7%BA%A2%E5%88%A9/"/>
    
    <category term="Temple dining" scheme="https://www.nablepart.com/tags/%E5%AF%BA%E5%BA%99%E9%A4%90%E9%A5%AE/"/>
    
  </entry>
  
  <entry>
    <title>Team Spirit takes the Ti12 overall title</title>
    <link href="https://www.nablepart.com/bb04e5b091d1/"/>
    <id>https://www.nablepart.com/bb04e5b091d1/</id>
    <published>2023-10-30T12:00:00.000Z</published>
    <updated>2025-08-25T09:00:39.794Z</updated>
    
    <content type="html"><![CDATA[<p><img src="https://s2.loli.net/2023/10/31/2zauEPpF3CS4q5d.png" alt="image.png"></p><h2 id="Team-Spirit-Takes-Ti12-Title-Chinese-Army-Takes-Third-Place"><a href="#Team-Spirit-Takes-Ti12-Title-Chinese-Army-Takes-Third-Place" class="headerlink" title="Team Spirit Takes Ti12 Title, Chinese Army Takes Third Place"></a>Team Spirit Takes Ti12 Title, Chinese Teams Take Third and Fourth</h2><p>In the 2023 DOTA2 International (Ti12), which concluded this morning, Team Spirit defeated Team GG 3:0 in the grand final to claim the championship. It is Team Spirit’s second title, two years after their first, tying Team OG’s record of two championships.</p><p>In addition, the two teams from China, LGD and AR, finished third and fourth respectively. Neither was widely favored before the tournament, and with China holding only two slots this year, taking third and fourth already counts as overachieving in the eyes of domestic DOTA2 fans.</p><p>After the tournament, the game’s lead developer IceFrog also posted a message on Weibo congratulating Team Spirit on their win:</p><p><img src="https://s2.loli.net/2023/10/31/18Zz9NbY6StF4Jx.png" alt="image.png"></p><p>As a well-known “yearly blogger”, IceFrog basically only updates his Weibo at the end of each year’s Ti, so netizens teased him for having “accomplished the annual target”.</p><h2 id="Old-Friends-Chandler-actor-Matthew-Perry-passed-away"><a href="#Old-Friends-Chandler-actor-Matthew-Perry-passed-away" class="headerlink" title="Old Friends Chandler actor Matthew Perry passed away"></a>“Friends” Chandler actor Matthew Perry passed away</h2><p>According to TMZ, Matthew Perry, who played Chandler on “Friends”, died of cardiac arrest at his home in Los Angeles at the age of 54. 
</p><p><img src="https://s2.loli.net/2023/10/31/Mr4LyoFnesgiGTJ.png" alt="image.png"></p><p>The classic comedy, which ran from 1994 to 2004, was among the most watched TV shows of its era, with the final episode reaching tens of millions of viewers in the US.</p><p>Matthew Perry, who played one of the main characters, rose to fame with the show and was nominated for an Emmy Award in 2002. At the height of his fame, however, it was revealed that he struggled with alcoholism and medication addiction, which is why media reports of his death made a point of mentioning that “no drugs were found at the scene”.</p><p>Upon learning of his death, the official “Friends” account posted a tribute: “We are deeply saddened to learn of the passing of Matthew Perry, he was a gift from God to all and our hearts are with his family, loved ones and all of his fans.”</p><p><img src="https://s2.loli.net/2023/10/31/fenKldvJx6BUQzm.png" alt="image.png"></p><h2 id="The-Sega-X-Taro-Yokoo-co-produced-handheld-game-announced-it-was-discontinuing-after-six-months-in-service"><a href="#The-Sega-X-Taro-Yokoo-co-produced-handheld-game-announced-it-was-discontinuing-after-six-months-in-service" class="headerlink" title="The Sega X Taro Yokoo co-produced handheld game announced it was discontinuing after six months in service"></a>The SEGA x Taro Yokoo co-produced mobile game announced its shutdown after six months in service</h2><p><img src="https://s2.loli.net/2023/10/31/nWwBTKAO9lQyr3I.png" alt="image.png"></p><p>404 GAME RE:SET -Error Game Re:set-, a shooter-style RPG developed by SEGA with Taro Yokoo (better known as Yoko Taro) as creative director, has announced that service will end on January 5, 2024; the game launched on April 5 and will have had a lifespan of just 255 days.</p><p><img src="https://s2.loli.net/2023/10/31/xtKkiyURfpZhd3e.png" alt="image.png"></p><p>The game’s worldview takes place in a twisted world ruled by SEGA, who have given self-will to various classic games and 
use their powers to change the world into what they want it to be. </p><p>As for the reason, the shutdown announcement states that “despite various collaborations and out-of-game initiatives, the development team has been unable to produce enough game content to sustain operations, and has been forced to make the decision to end the game’s service.”</p><p>Although the game will shut down early next year, the developers will keep to the original update schedule and launch new events until service officially ends, so that players can enjoy the game to the last.</p><p><img src="https://s2.loli.net/2023/10/31/GdRVvyo5CtnDJ82.png" alt="image.png"></p>]]></content>
    
    
    <summary type="html">Team Spirit takes the Ti12 overall title / &#39;Friends&#39; Chandler actor Matthew Perry dies</summary>
    
    
    
    <category term="Game News" scheme="https://www.nablepart.com/categories/Game-News/"/>
    
    
    <category term="Team Spirit" scheme="https://www.nablepart.com/tags/Team-Spirit/"/>
    
    <category term="Old Friends" scheme="https://www.nablepart.com/tags/Old-Friends/"/>
    
    <category term="Matthew Perry" scheme="https://www.nablepart.com/tags/Matthew-Perry/"/>
    
  </entry>
  
  <entry>
    <title>How to Avoid Resource Leaks in Java</title>
    <link href="https://www.nablepart.com/dda228a946b6/"/>
    <id>https://www.nablepart.com/dda228a946b6/</id>
    <published>2023-10-30T02:28:00.000Z</published>
    <updated>2025-08-25T09:00:39.806Z</updated>
    
    <content type="html"><![CDATA[<blockquote><p>Resource leaks can have serious consequences in Java applications, leading to performance degradation, memory leaks, and even system crashes. In this article, we will explore common scenarios that can cause resource leaks and provide solutions to prevent them. By following these best practices, you can ensure that your Java code is efficient, reliable, and free from resource leaks.</p></blockquote><h2 id="Introduction"><a href="#Introduction" class="headerlink" title="Introduction"></a>Introduction</h2><p>Resource leaks occur when important resources such as files, database connections, or network connections are not properly closed and released in Java programs. This can result in the resources being unavailable for other processes, leading to performance issues, memory leaks, and even system crashes. In the following sections, we will discuss common scenarios that can cause resource leaks and provide solutions to prevent them.</p><h2 id="File-Resource-Leaks"><a href="#File-Resource-Leaks" class="headerlink" title="File Resource Leaks"></a>File Resource Leaks</h2><p>When working with files in Java, it is crucial to ensure that file streams are properly closed to avoid resource leaks. Here are two common scenarios and their corresponding solutions:</p><h3 id="Scenario-1-Failure-to-Close-FileInputStream-or-FileOutputStream"><a href="#Scenario-1-Failure-to-Close-FileInputStream-or-FileOutputStream" class="headerlink" title="Scenario 1: Failure to Close FileInputStream or FileOutputStream"></a>Scenario 1: Failure to Close FileInputStream or FileOutputStream</h3><p>In this scenario, the file stream is not closed using the close() method, which can lead to resource leaks. 
To prevent this, it is recommended to use the try-with-resources statement, which automatically closes the file stream after it is no longer needed.</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">try</span> (<span class="type">FileInputStream</span> <span class="variable">fis</span> <span class="operator">=</span> <span class="keyword">new</span> <span class="title class_">FileInputStream</span>(<span class="string">&quot;file.txt&quot;</span>)) &#123;</span><br><span class="line">    <span class="comment">// Perform file input stream operations</span></span><br><span class="line">&#125; <span class="keyword">catch</span> (IOException e) &#123;</span><br><span class="line">    <span class="comment">// Handle exceptions</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><h3 id="Scenario-2-Failure-to-Close-BufferedReader-or-BufferedWriter"><a href="#Scenario-2-Failure-to-Close-BufferedReader-or-BufferedWriter" class="headerlink" title="Scenario 2: Failure to Close BufferedReader or BufferedWriter"></a>Scenario 2: Failure to Close BufferedReader or BufferedWriter</h3><p>Similarly, when using buffered readers or writers, it is important to close the stream using the close() method to avoid resource leaks. 
The try-with-resources statement can also be used in this scenario.</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">try</span> (<span class="type">BufferedReader</span> <span class="variable">reader</span> <span class="operator">=</span> <span class="keyword">new</span> <span class="title class_">BufferedReader</span>(<span class="keyword">new</span> <span class="title class_">FileReader</span>(<span class="string">&quot;file.txt&quot;</span>))) &#123;</span><br><span class="line">    <span class="comment">// Perform buffered reader operations</span></span><br><span class="line">    <span class="comment">// ...</span></span><br><span class="line">&#125; <span class="keyword">catch</span> (IOException e) &#123;</span><br><span class="line">    <span class="comment">// Handle exceptions</span></span><br><span class="line">    <span class="comment">// ...</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><h2 id="Database-Connection-Resource-Leaks"><a href="#Database-Connection-Resource-Leaks" class="headerlink" title="Database Connection Resource Leaks"></a>Database Connection Resource Leaks</h2><p>Improper handling of database connections can lead to resource leaks and performance degradation in Java applications. Here are two common scenarios and their solutions:</p><h3 id="Scenario-1-Failure-to-Close-Connection"><a href="#Scenario-1-Failure-to-Close-Connection" class="headerlink" title="Scenario 1: Failure to Close Connection"></a>Scenario 1: Failure to Close Connection</h3><p>If the close() method is not explicitly called on a database connection, it can result in resource leaks and the exhaustion of database connection pool resources. 
To prevent this, it is recommended to use the try-with-resources statement to automatically close the connection after it is no longer needed.</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">try</span> (<span class="type">Connection</span> <span class="variable">conn</span> <span class="operator">=</span> DriverManager.getConnection(url, username, password);</span><br><span class="line">     <span class="type">Statement</span> <span class="variable">stmt</span> <span class="operator">=</span> conn.createStatement();</span><br><span class="line">     <span class="type">ResultSet</span> <span class="variable">rs</span> <span class="operator">=</span> stmt.executeQuery(sql)) &#123;</span><br><span class="line">    <span class="comment">// Perform database operations</span></span><br><span class="line">&#125; <span class="keyword">catch</span> (SQLException e) &#123;</span><br><span class="line">    <span class="comment">// Handle exceptions</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><h3 id="Scenario-2-Failure-to-Return-Connection-to-the-Connection-Pool"><a href="#Scenario-2-Failure-to-Return-Connection-to-the-Connection-Pool" class="headerlink" title="Scenario 2: Failure to Return Connection to the Connection Pool"></a>Scenario 2: Failure to Return Connection to the Connection Pool</h3><p>When using connection pooling libraries like Apache Commons DBCP, it is important to return the connection to the pool after it has been used. 
Failure to do so can result in resource leaks and the exhaustion of connection pool resources.</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment">// Example using Apache Commons DBCP</span></span><br><span class="line"><span class="keyword">try</span> (<span class="type">Connection</span> <span class="variable">conn</span> <span class="operator">=</span> dataSource.getConnection()) &#123;</span><br><span class="line">    <span class="comment">// Perform database operations</span></span><br><span class="line">&#125; <span class="keyword">catch</span> (SQLException e) &#123;</span><br><span class="line">    <span class="comment">// Handle exceptions</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><h2 id="Network-Connection-Resource-Leaks"><a href="#Network-Connection-Resource-Leaks" class="headerlink" title="Network Connection Resource Leaks"></a>Network Connection Resource Leaks</h2><p>In Java, improper handling of network connections can lead to resource leaks and network resource exhaustion. Here are two common scenarios and their solutions:</p><h3 id="Scenario-1-Failure-to-Close-Socket"><a href="#Scenario-1-Failure-to-Close-Socket" class="headerlink" title="Scenario 1: Failure to Close Socket"></a>Scenario 1: Failure to Close Socket</h3><p>If the close() method is not explicitly called on a socket, it can result in resource leaks and the occupation of network resources. 
To prevent this, it is recommended to use the try-with-resources statement to automatically close the socket after it is no longer needed.</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">try</span> (<span class="type">Socket</span> <span class="variable">socket</span> <span class="operator">=</span> <span class="keyword">new</span> <span class="title class_">Socket</span>(<span class="string">&quot;host&quot;</span>, port)) &#123;</span><br><span class="line">    <span class="comment">// Perform network communication operations</span></span><br><span class="line">&#125; <span class="keyword">catch</span> (IOException e) &#123;</span><br><span class="line">    <span class="comment">// Handle exceptions</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><h3 id="Scenario-2-Failure-to-Close-ServerSocket"><a href="#Scenario-2-Failure-to-Close-ServerSocket" class="headerlink" title="Scenario 2: Failure to Close ServerSocket"></a>Scenario 2: Failure to Close ServerSocket</h3><p>When implementing server applications, it is important to close both the ServerSocket itself and every client Socket returned by accept() once they are no longer needed. 
Failure to do so can result in resource leaks and the occupation of network resources. Wrapping each accepted client Socket in its own try-with-resources block ensures it is closed after its request has been handled.</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">try</span> (<span class="type">ServerSocket</span> <span class="variable">serverSocket</span> <span class="operator">=</span> <span class="keyword">new</span> <span class="title class_">ServerSocket</span>(port)) &#123;</span><br><span class="line">    <span class="keyword">while</span> (<span class="literal">true</span>) &#123;</span><br><span class="line">        <span class="keyword">try</span> (<span class="type">Socket</span> <span class="variable">socket</span> <span class="operator">=</span> serverSocket.accept()) &#123;</span><br><span class="line">            <span class="comment">// Handle client requests; the socket is closed automatically</span></span><br><span class="line">        &#125;</span><br><span class="line">    &#125;</span><br><span class="line">&#125; <span class="keyword">catch</span> (IOException e) &#123;</span><br><span class="line">    <span class="comment">// Handle exceptions</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><h2 id="Preventing-Resource-Leaks"><a href="#Preventing-Resource-Leaks" class="headerlink" title="Preventing Resource Leaks"></a>Preventing Resource Leaks</h2><p>To avoid resource leaks in Java applications, it is important to adopt good coding practices and ensure that resources are always explicitly closed after use. 
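</p><p>Beyond the JDK types shown above, any class of your own can opt into try-with-resources by implementing the AutoCloseable interface. Below is a minimal sketch; the ManagedResource class and its methods are hypothetical, for illustration only:</p>

```java
// Hypothetical custom resource: implementing AutoCloseable lets
// try-with-resources close it automatically.
class ManagedResource implements AutoCloseable {
    private boolean closed = false;

    void doWork() {
        if (closed) throw new IllegalStateException("resource already closed");
        // ... work with the underlying resource ...
    }

    @Override
    public void close() {
        closed = true; // release the underlying resource exactly once
    }

    boolean isClosed() { return closed; }
}

public class ManagedResourceDemo {
    public static void main(String[] args) {
        ManagedResource ref;
        try (ManagedResource res = new ManagedResource()) {
            res.doWork();
            ref = res;
        } // close() runs here automatically, even if doWork() throws
        System.out.println(ref.isClosed()); // prints "true"
    }
}
```

<p>The same pattern applies to any wrapper you write around files, sockets, or pooled connections. 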
Here are some strategies to prevent resource leaks:</p><h3 id="Use-try-with-resources-Statement"><a href="#Use-try-with-resources-Statement" class="headerlink" title="Use try-with-resources Statement"></a>Use try-with-resources Statement</h3><p>The try-with-resources statement is a convenient way to automatically close resources after they are no longer needed. It ensures that the close() method is called on the resource, even if an exception occurs.</p><h3 id="Utilize-Connection-Pools"><a href="#Utilize-Connection-Pools" class="headerlink" title="Utilize Connection Pools"></a>Utilize Connection Pools</h3><p>Using connection pooling libraries, such as Apache Commons DBCP or HikariCP, can help manage database connections effectively. These libraries handle connection acquisition, release, and recycling, reducing the risk of resource leaks.</p><h3 id="Use-Network-Frameworks"><a href="#Use-Network-Frameworks" class="headerlink" title="Use Network Frameworks"></a>Use Network Frameworks</h3><p>When working with network connections, it is advisable to use reliable network frameworks like Apache HttpClient or Netty. These frameworks handle connection management and provide features to prevent resource leaks.</p><h3 id="Test-and-Monitor"><a href="#Test-and-Monitor" class="headerlink" title="Test and Monitor"></a>Test and Monitor</h3><p>Regularly testing and monitoring your Java applications can help identify and resolve potential resource leaks. Automated testing, code reviews, and performance profiling can help detect and fix resource leak issues before they impact system stability and performance.</p><h2 id="Conclusion"><a href="#Conclusion" class="headerlink" title="Conclusion"></a>Conclusion</h2><p>Resource leaks are common issues in Java programs, especially when dealing with important resources like files, database connections, and network connections. 
By adopting good coding practices, such as using the try-with-resources statement, connection pooling libraries, and reliable network frameworks, you can prevent resource leaks and ensure the stability and performance of your applications. Remember to always close resources explicitly and test and monitor your code regularly for potential issues.</p>]]></content>
    
    
    <summary type="html">In this article, we will explore common situations that can lead to resource leaks and provide solutions to prevent them. By following these best practices, you can ensure that your Java code is efficient, reliable, and free of resource leaks.</summary>
    
    
    
    <category term="教程指南" scheme="https://www.nablepart.com/categories/%E6%95%99%E7%A8%8B%E6%8C%87%E5%8D%97/"/>
    
    
    <category term="Java" scheme="https://www.nablepart.com/tags/Java/"/>
    
    <category term="resource leakage" scheme="https://www.nablepart.com/tags/resource-leakage/"/>
    
  </entry>
  
  <entry>
    <title>Depression Among Investors: A Common but Overlooked Problem</title>
    <link href="https://www.nablepart.com/29bd1010f4b8/"/>
    <id>https://www.nablepart.com/29bd1010f4b8/</id>
    <published>2023-10-29T12:28:28.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
    <content type="html"><![CDATA[<h2 id="引言"><a href="#引言" class="headerlink" title="引言"></a>Introduction</h2><p>Depression among entrepreneurs has long drawn attention; it seems every big-name founder has been discussed in this light at some point. By comparison, depression among investors is rarely mentioned. Some investors have experienced it, such as Xu Xiaoping, but his depression occurred during his entrepreneurial years, and he was fortunate enough to be cured, in a sense, by becoming an investor.</p><p><img src="https://cdn.jsdelivr.net/gh/PirlosM/image@main/20231030133052.png"></p><p>So why is the rate of depression among investors relatively low? This article explores that question and reveals the difficulties and challenges investors face.</p><h2 id="失败：投资人的常态"><a href="#失败：投资人的常态" class="headerlink" title="失败：投资人的常态"></a>Failure: The Investor’s Normal State</h2><p>Unlike entrepreneurs, investors do not have jobs oriented around success. In fact, according to the observation of a well-known Israeli LP, 95% of venture capital firms fail to make money. This means most VC funds cannot achieve the required rate of return; even though some funds earn 2-3x, 50% of funds return less than 1x.</p><p>Take Sequoia Capital, one of the world’s top investment firms: of the 600-plus projects it has invested in, only about 8.3% exited through IPOs and 20% through acquisitions; most of the rest were losing projects that could not be exited at all. Similar patterns appear in the portfolios of well-known firms such as Accel and Benchmark.</p><p>Investors forever face uncertainty and risk; they cannot predict where the next landmine will go off. Yet they hold to one belief: “As long as I’m still standing, nothing can break my mindset.” They know that investing in a single star company and exiting at the right moment can carry an entire firm’s track record. Such opportunities, however, are rare.</p><h2 id="动荡：随波逐流"><a href="#动荡：随波逐流" class="headerlink" title="动荡：随波逐流"></a>Turbulence: Drifting With the Current</h2><p>If only 5% of venture capital firms make money, how do the other 95% survive?</p><p>In fact, most VC firms do not live on investment returns but mainly on the management fees paid by LPs. They collect 2% of committed capital each year; on 100 million yuan of commitments, that is 2 million yuan in annual fees. In addition, when portfolio companies are exited, investors also take 20% carry.</p><p>Even if a firm cannot deliver good returns to its LPs, it is still managing the money, so the management fee has to be paid. But even this route is tightening. Recently, to attract LPs, Blackstone cut its annual management fee from 1.5%-2% to 1.25% and offered LPs a six-month fee holiday. Well-known firms such as Tiger Global and Sequoia in the U.S. have made similar moves, all offering management fees below the industry-standard 2%.</p><p>When top firms start cutting management fees, it shows how hard this business has become. Dollar funds are not the only ones struggling; RMB funds face similar problems. The industry’s turbulence falls directly on individuals: pay cuts, reassignments, and layoffs have already put investors under enormous pressure, and they must also bear the heavy weight of DPI (distributions to paid-in capital).</p><p>As dollar funds gradually withdraw from the Chinese market, the investable tracks tilt ever more toward hard technology. Investors must therefore adapt to the style of RMB funds, follow the shift in sectors, and handle local governments’ investment-attraction work in their 2023 deals.</p><h2 id="困难：牙齿打碎也要咽下去"><a href="#困难：牙齿打碎也要咽下去" class="headerlink" title="困难：牙齿打碎也要咽下去"></a>Hardship: Swallow It Even With Broken Teeth</h2><p>Many investment firms have been unable to raise a single cent for two or three years. Most dollar LPs have not even visited China for three straight years. All of this leaves firms in a bind. Small firms suffer most: on one hand they do not suit the taste of state capital, and on the other they must still meet local governments’ investment-attraction demands.</p><p>This contradiction gives investors headaches, because they face not only competition from peers but also CVCs (corporate venture capital) taking a share. In recent years LPs’ enthusiasm for direct investment has kept rising; among government guidance funds, the share of direct deals exceeded 40% in 2022.</p><p>Between raising funds and exiting deals, investors face enormous challenges. Sometimes they must fight hard just to grab a small allocation. Unsavory stories even circulate in the industry, such as a fund of funds that investigated a GP’s projects under the guise of due diligence and ended up snatching those projects for itself.</p><h2 id="快乐：选择的权利"><a href="#快乐：选择的权利" class="headerlink" title="快乐：选择的权利"></a>Happiness: The Right to Choose</h2><p>Although an investor’s work is full of challenges and difficulties, most investors remain remarkably steady.</p><p>Breakdowns happen, but they are no big deal.</p><p>That is because investors have always been people who enjoy life, not merely lonely strivers. Compared with entrepreneurs, an investor is more like a small boat with a free soul.</p><p>Even in hard times they find ways to be happy. They play cards, Texas hold’em, or Werewolf to unwind; they enjoy fine food and wine and organize gatherings. These days, it is said, investors prefer to run, work out, play basketball and soccer, and enter marathons together, while female investors also favor yoga and Pilates.</p><p><img src="https://cdn.jsdelivr.net/gh/PirlosM/image@main/20231030133244.png"></p><p>Zhang Lei of Hillhouse Capital once said frankly: “Through exercise and meditation, I empty my mind in a state of extreme focus.” For investors, life itself is a pleasure, and they know how to enjoy it.</p><h2 id="结论"><a href="#结论" class="headerlink" title="结论"></a>Conclusion</h2><p>Depression among investors is mentioned far less often than among entrepreneurs, yet it is a real and overlooked problem. An investor’s work is full of challenges and difficulties, but investors cope with a positive mindset and by finding their own joy.</p><p>Whether facing failed investments or fundraising troubles, investors keep going. They know that as long as they stay positive and enjoy life, they can overcome any difficulty.</p><p>So let us salute the investors who quietly persevere! Their effort keeps our startup ecosystem thriving. And let us also pay attention to investors’ mental health and give them more support and understanding.</p>]]></content>
    
    
    <summary type="html">Depression among entrepreneurs and investors deserves attention, as it can affect their decision-making, work efficiency, and relationships. Although depression manifests differently in entrepreneurs and investors, both need timely attention and treatment.</summary>
    
    
    
    <category term="Investors" scheme="https://www.nablepart.com/categories/Investors/"/>
    
    
    <category term="创业" scheme="https://www.nablepart.com/tags/%E5%88%9B%E4%B8%9A/"/>
    
    <category term="心态" scheme="https://www.nablepart.com/tags/%E5%BF%83%E6%80%81/"/>
    
  </entry>
  
  <entry>
    <title>There&#39;s still a group of players holding on to World of Warcraft collectible cards</title>
    <link href="https://www.nablepart.com/c8eba6671893/"/>
    <id>https://www.nablepart.com/c8eba6671893/</id>
    <published>2023-10-29T12:00:00.000Z</published>
    <updated>2025-08-25T09:00:39.798Z</updated>
    
    <content type="html"><![CDATA[<p><img src="https://s2.loli.net/2023/10/30/ouD8b7SfFE1kdJ2.png" alt="image.png"></p><p>Ten years after the game’s death, there’s still a group of players holding on to World of Warcraft collectible cards.</p><p>Earlier this year, Blizzard shut down its Chinese Battle.net servers and withdrew from the Chinese market. World of Warcraft, Hearthstone, and other games ceased operations and said goodbye to Chinese players.</p><p>Players met Blizzard’s departure with remembrance, regret, and anger. Killmonger, however, felt it was time for the World of Warcraft Trading Card Game (WOW TCG) to recruit a wave of new players.</p><p>In early October, Killmonger uploaded Chinese-language resources for the new WOW TCG Class Pack cards to the Tieba forum; before that, he had shared plenty of collecting and deck-building experience there and had posted an introduction to the WOW TCG.</p><p>Everything Killmonger did was meant to attract players forced to give up Hearthstone, and those who miss World of Warcraft, to try the WOW TCG; or, in the more ideal case, to turn them into WOW TCG players. As the owner of both the WOW TCG Tieba and the Warcraft Cards Tieba, no one hopes more than he does that this old game can be revived.</p><p><img src="https://s2.loli.net/2023/10/30/8cT12F3rWYyiobd.png" alt="image.png"></p><p>Unfortunately, Killmonger’s relentless publicity did not bring significant results. 
The QQ group he set up for online Warcraft card battles has received only a few scattered join requests in recent days; the Tieba remains quiet, and the posts and replies come from the same few familiar faces.</p><p>This was within Killmonger’s expectations. After all, in outsiders’ eyes, the WOW TCG is a game that Blizzard sentenced to death a decade ago. What most people cannot imagine is that throughout that decade, Killmonger and a group of players like him have kept playing it, day after day, as if nothing had changed.</p><h2 id="WOW-TCG-was-born-three-years-after-the-release-of-World-of-Warcraft"><a href="#WOW-TCG-was-born-three-years-after-the-release-of-World-of-Warcraft" class="headerlink" title="WOW TCG was born three years after the release of World of Warcraft."></a>WOW TCG was born three years after the release of World of Warcraft.</h2><p>In 2007, as World of Warcraft continued to explode globally, Blizzard and UDE (Upper Deck Entertainment) collaborated to launch the WOW TCG, with Blizzard providing the IP and UDE handling card-face design, card production, and tournament operations. The business in China was handed entirely to Beijing Xinrui Zone Toys Company.</p><p><img src="https://s2.loli.net/2023/10/30/seI1hVOJUErxS7N.png" alt="image.png"></p><p>Collectible card games have always been a niche in China, and although the WOW TCG was backed by the popularity of World of Warcraft, its player base was in fact very limited.</p><p>Killmonger told me that WOW TCG players nationwide communicated in a group called “WOW TCG Warcraft Card Game Community”; even at the height of the game’s development there were only about 400 people in it, far short of the popularity of today’s mainstream TCGs 
– such as the Pokémon TCG.</p><p>In stark contrast to the scarcity of players, sales of WOW TCG cards were thriving.</p><p>As part of the cross-promotion, Blizzard designed special coated cards for the WOW TCG that could be redeemed for exclusive in-game items and mounts via a code scratched off the card face; players called them “scratch cards”.</p><p>Players who bought card packs had a random chance of pulling these scratch cards and could redeem the codes for rare in-game items and mounts.</p><p><img src="https://s2.loli.net/2023/10/30/4t1h6OfFukvLE7W.png" alt="image.png"></p><p>The most expensive mounts in all of World of Warcraft came from these scratch cards: the Swift Spectral Tiger, once jokingly called “a square meter in Shanghai”, and the Magic Rooster, said to be worth a Ford Focus. These showy mounts could only be obtained from scratch cards, and at the time they traded for five-figure sums.</p><p><img src="https://s2.loli.net/2023/10/30/e6YWbIOzmDsG3dg.png" alt="image.png"></p><ul><li><strong>The Swift Spectral Tiger, which sold for as much as 70,000 RMB</strong></li></ul><p>Since pulling cards was the only way to get these mounts, many World of Warcraft players spent heavily “holding whole boxes”, and whenever someone opened a box in a card shop, it always drew a crowd of onlookers. 
The scene was like a casino: some people lost everything over one small card, while others made a fortune.</p><p>The price of each rare mount grabbed attention and further heated the pack-opening frenzy, except that no one paid attention to what was printed on the cards; opening packs had itself become a kind of lottery for adults.</p><p>The frenzy was not all bad for the WOW TCG: players could buy the cards they needed at low prices and enter tournaments, agents at every level made profits and kept investing, and Blizzard made money from it all. On the surface, everything seemed to be thriving.</p><p>Like most people, Killmonger learned of the WOW TCG’s existence under these circumstances. The difference was that he preferred the WOW TCG itself to the scratch-card mounts.</p><p>After six months of learning on MWS, a free online battle platform, Killmonger formally entered the embrace of the WOW TCG.</p><h2 id="In-Killsumaru’s-recollection-he-and-WOW-TCG-had-a-rather-“sweet”-time"><a href="#In-Killsumaru’s-recollection-he-and-WOW-TCG-had-a-rather-“sweet”-time" class="headerlink" title="In Killsumaru’s recollection, he and WOW TCG had a rather “sweet” time."></a>In Killmonger’s recollection, he and the WOW TCG had a rather “sweet” time.</h2><p>“Mainly because there were official tournaments,” Killmonger explains. Backed by the popularity of World of Warcraft, the WOW TCG quickly built a loyal player base after launch, laying the foundation for tournaments around the world.</p><p>China was no exception. Xinrui Zone Toys hosted all domestic tournaments, holding a Darkmoon Faire each spring and fall, plus a National Championship and an Annual Championship every year. 
The tournament prizes were generous, attracting many masters from this and other card games to become “bounty hunters”, which also raised the overall level of competition in the WOW TCG.</p><p>In Killmonger’s opinion, this was the WOW TCG’s greatest charm. Even now he often recalls the days of playing tournaments with friends. He told me: “The 2013 National Championship was the liveliest, with as many as 400 participants. Everything seemed to be flourishing; I never imagined it would be the last show.”</p><p>In 2013, out of the blue, official word came from Blizzard that the WOW TCG would stop shipping new cards and would no longer maintain tournament operations. The carefully built decks in players’ hands and the boxes of cards in agents’ warehouses instantly turned into waste paper; the news struck everyone like a bolt from a clear sky.</p><p>For people at the time, this was genuinely hard to foresee: domestic tournaments were in full swing, and events were being held all over the world. As the WOW TCG’s highest honor, the World Championship was held every year in the U.S., and only players who had accumulated enough points were eligible to enter. 
Players all over the world longed to reach that arena and looked forward to the day they could compete on the same stage as the world’s elite.</p><p><img src="https://s2.loli.net/2023/10/30/deD4xymN3HCZOBs.png" alt="image.png"></p><ul><li><strong>The 2010 WOW TCG World Championship</strong></li></ul><p>Players were still looking forward to bigger tournaments, stronger opponents, richer decks, and higher podiums, but on an ordinary summer day, everything came to an abrupt end.</p><p>At the end of the same year, Hearthstone, a new Blizzard game published in China by NetEase, opened its Chinese beta test, and many of its cards directly reused WOW TCG card art.</p><p><img src="https://s2.loli.net/2023/10/30/SInYfx2Bz69ayHP.png" alt="image.png"></p><p>The two games need not have been in conflict, just as Magic Online is the digital version of Magic: The Gathering, with Wizards of the Coast adding each new set to both.</p><p>But the WOW TCG’s fate has made it hard to convince players that its shutdown was completely unrelated to Hearthstone.</p><p>Many “Hearthstone killed the WOW TCG” theories have circulated in the player community. 
One widely circulated version held that the WOW TCG was shut down because Hearthstone reused its original artwork, and if Blizzard had kept the WOW TCG running, it would have ended up in legal disputes with the artists.</p><p>In fact, anyone who has played both games can see that Hearthstone’s rules differ greatly from the WOW TCG’s; the two could never have run in parallel the way Magic: The Gathering and Magic Online do, and changing either one would only have created pointless trouble for Blizzard.</p><p>The truth is that WOW TCG sales had been declining in its later years and could no longer bring Blizzard substantial income, so running it as a separate business hardly seemed worth the effort.</p><p>Ultimately, Blizzard shut down the WOW TCG for one reason: Dad had decided it was time for you to put down the old physical decks and play the new Hearthstone.</p><p>Abandoned for commercial reasons, the 400-plus-member group buzzed for a while. As with Blizzard’s exit from China earlier this year, there were remembrances, laments, and tirades; then the voices grew smaller and smaller, until now no one talks about it anymore.</p><p>Ten years after the WOW TCG went on hiatus, all the buzz of those years has gone cold. 
Although Killmonger also believes that Hearthstone brought about the WOW TCG’s collapse, before Blizzard left China he would still occasionally play a couple of Hearthstone matches.</p><p>One of Killmonger’s favorite WOW TCG cards has even been remade for Hearthstone, except that its name is no longer Blade Dance Darth Vader but Drew Korfe, a card that players cannot directly control.</p><p><img src="https://s2.loli.net/2023/10/30/o5HVOmja38ryxIq.png" alt="image.png"></p><p><strong>Speaking of these changes, Killmonger says he now just feels nostalgic:</strong></p><ul><li>“Seeing well-known characters reincarnated in this form made me a little sad at first; after all, a game I played well was suddenly gone. Gradually I came to terms with it, and now seeing the old Warcraft card faces just makes me nostalgic.”</li></ul><h2 id="WOWTCG-REBORN-Warcraft-Cards-Reborn"><a href="#WOWTCG-REBORN-Warcraft-Cards-Reborn" class="headerlink" title="WOWTCG REBORN (Warcraft Cards Reborn)"></a>WOWTCG REBORN (Warcraft Cards Reborn)</h2><p>When Killmonger introduced me to WOWTCG REBORN (Warcraft Cards Reborn), I noticed his name sitting at #4 on the REBORN Global Tour leaderboard.</p><p><img src="https://s2.loli.net/2023/10/30/RSJGxVpjqfF154W.png" alt="image.png"></p><p>In 2019, a group of overseas players began designing new expansions for the WOW TCG, drawing new card faces and adding new heroes. They called their new cards REBORN and uploaded them to a website they built, so that players could download and print them, or import them into the online battle platform MWS.</p><p>To date, the REBORN team has released two major sets, one large expansion, and a scattering of class packs. 
Although the team has not disclosed its membership, the card designs and artists’ names suggest there are Chinese players among them.</p><p><img src="https://s2.loli.net/2023/10/30/gTp1GqIFdLnxVt3.png" alt="image.png"></p><p>Killmonger saw the emergence of WOWTCG REBORN as a silver lining that might save the WOW TCG. After obtaining authorization, he began bringing REBORN cards back to China: Constellation Hao, who had worked on the Chinese edition of WOWTCG REBORN, handled the text translation, while Killmonger typeset the text onto the card faces and packaged the PSD files, then uploaded them to the Tieba.</p><p>What makes him even happier is that the REBORN team not only releases new WOW TCG sets but also organizes online tournaments once a month, posting tournament information and the top players’ decklists on its website. He spares no effort translating the deck information into Chinese and bringing it back to the Tieba, to give domestic players more references.</p><p>When I remarked that Killmonger must have put a great deal of time and effort into the translation work, he just said “it’s okay”. In his view, translating is equivalent to reading carefully through the entire card list, which makes it easier to plan decks; it helps other people, and it also lets him understand the metagame in advance.</p><p>As for what has sustained his ten years of persistence with the WOW TCG, Killmonger himself could not quite say; in the end he summed it up as, “It’s still the love for World of Warcraft.” World of Warcraft gave him so many good memories that he later tried Magic: The Gathering, Yu-Gi-Oh!, Pokémon, and seven or eight other card games, but the WOW TCG remains unforgettable to him.</p><p><img src="https://s2.loli.net/2023/10/30/2Ne4qGPJC9FwxRh.png" 
alt="image.png"></p><ul><li><strong>Killmonger’s World of Warcraft account</strong></li></ul><p>In the end, Killmonger cut short his reminiscing about World of Warcraft and the WOW TCG with a sigh: “If Warcraft cards hadn’t died, I would have kept playing. But history has no ifs.”</p><p>In fact, there is more to be sad about. Whether the Swift Spectral Tiger or the Magic Rooster, these once five-figure, out-of-print mounts have, over the years of World of Warcraft’s operation, been sold again and again through recharge promotions and shop bundles, and have long since ceased to be rare.</p><p><img src="https://s2.loli.net/2023/10/30/7sbzIUH8WrLuJEe.png" alt="image.png"></p><p>And whether redeemed from sky-high scratch cards back then or bought “cheap” from the shop in recent years, all of those mounts now sit in the same “electronic urn”, with no telling whether they will ever see the light of day again.</p><p>Yet the WOW TCG, which once depended on World of Warcraft’s popularity and the value of rare mounts, has shown a calm, natural vitality after the carnival receded. Killmonger told me that even now, players in many regions still organize their own offline tournaments.</p><p>Interestingly, while Blizzard’s games have completely withdrawn from China, there is a group of players still holding on to a game that died a decade ago but will never go away.</p>]]></content>
    
    
    <summary type="html">Ten years after the game&#39;s death, there&#39;s still a group of players holding on to World of Warcraft collectible cards</summary>
    
    
    
    <category term="Game News" scheme="https://www.nablepart.com/categories/Game-News/"/>
    
    
    <category term="World of Warcraft" scheme="https://www.nablepart.com/tags/World-of-Warcraft/"/>
    
    <category term="Collectible Cards" scheme="https://www.nablepart.com/tags/Collectible-Cards/"/>
    
    <category term="Game" scheme="https://www.nablepart.com/tags/Game/"/>
    
    <category term="Hearthstone Legends" scheme="https://www.nablepart.com/tags/Hearthstone-Legends/"/>
    
  </entry>
  
  <entry>
    <title>From climbing to running - why do we need unit tests?</title>
    <link href="https://www.nablepart.com/832f48284b6e/"/>
    <id>https://www.nablepart.com/832f48284b6e/</id>
    <published>2023-10-29T11:50:26.000Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
    <content type="html"><![CDATA[<h3 id="♪-Ex-language"><a href="#♪-Ex-language" class="headerlink" title="Preface"></a>Preface</h3><p><strong>Does braking slow you down or speed you up?</strong> We usually think of writing unit tests as a drag that delays development progress, as if we were “putting the brakes on” the project. This article starts from that question and looks further down the road to discuss why unit testing actually makes software development run faster.</p><h4 id="What-is-unit-testing"><a href="#What-is-unit-testing" class="headerlink" title="What is unit testing"></a>What is unit testing</h4><p>Most readers are already familiar with unit tests; here is Wikipedia’s definition as a refresher:</p><p>In programming, a Unit Test, also known as a Module Test, is a testing exercise that checks the correctness of program modules (<strong>the smallest units of software design</strong>).</p><p>The idea of unit testing has always accompanied programming. When we first learned to write programs, we would construct sample inputs and sprinkle System.out.println calls throughout the code to make sure every intermediate step met expectations. That is exactly the process of decomposing a complex problem into sub-problems and verifying them one by one. Unit testing has the same purpose: to <strong>guarantee the correctness of the smallest units of a program, so that the correctness of the complex system built from those units can in turn be assured</strong>.</p><p>To understand more deeply why unit testing is needed, let’s first look at how testing systems have evolved.</p><h4 id="Test-System-Evolution"><a href="#Test-System-Evolution" class="headerlink" title="Test System Evolution"></a>Test System Evolution</h4><p><img src="https://pic3.zhimg.com/80/v2-4927197f57105727cacf2d54360c60de_720w.webp"></p><p>Software testing used to be a dedicated job (QA&#x2F;tester) whose daily work consisted largely of repetitive manual testing, tedious and error-prone.</p><p>Automated testing in the software industry has changed dramatically since the early 2000s. To cope with the size and complexity of modern software systems, developer-driven automated testing practices evolved, replacing tedious manual testing with software that tests software. Yet the legacy of the past remains: testing is still often a separate role, and yesterday’s QA has evolved into today’s SDET (Software Development Engineer in Test). Although we have learned to use tools, the model of separating R&amp;D from testing still leaves many problems behind. Why? Because the separation itself is the problem.</p><p><strong>When R&amp;D and testing are two separate positions, the delivery boundary is the software’s overall functionality (functional requirements) and usability.</strong> R&amp;D only needs to ensure that the software as a whole is functionally complete and usable, and testing likewise focuses on integration testing and end-to-end testing. 
However, software is composed of countless small units, and under this system no one focuses on the quality of those smallest units, which inevitably produces software that looks complete from the outside but fails in the middle.</p><p>Because of these shortcomings, companies that take engineering quality seriously, such as Google and Microsoft, have been moving from the SDET “2.0 era” to an integrated “3.0 era”: Microsoft removed the SDET role in 2015 and pioneered the concept of “Combined Engineering” in the Bing organization reshaped by Qi Lu, while Google replaced SETI with EngProd (Engineering Productivity), which focuses on building test platforms and tools rather than testing specific business logic.</p><h3 id="Why-unit-testing-is-needed"><a href="#Why-unit-testing-is-needed" class="headerlink" title="Why unit testing is needed"></a>Why unit testing is needed</h3><p>In today’s Internet era, software iterates faster and faster and R&amp;D carries ever more responsibility. DevOps says “you build it, you run it”; the merging of R&amp;D and testing into one role can be read as “<strong>you build it, you test it</strong>”. When R&amp;D is responsible for code quality and automated testing, good testing practices are essential.</p><h4 id="The-Tower-of-Testing"><a href="#The-Tower-of-Testing" class="headerlink" title="The Tower of Testing"></a>The Testing Pyramid</h4><p>Just as a building is built up from its foundation, testing has a similar pyramid structure. 
The diagram below, from the Testing chapter of <em>Software Engineering at Google</em>, summarizes Google's best practices for automated testing. The testing pyramid consists of three layers. At the bottom is unit testing, which accounts for 80% of the total and is the foundation of a software system; above it sit integration testing and end-to-end testing, accounting for 15% and 5% respectively. Because the proportions shrink from bottom to top, the structure is called the testing pyramid. This ratio is Google's recommendation, distilled from many years of practice, and is intended to improve R&amp;D efficiency (productivity) and confidence in the product.</p><p>One of the pyramid's core ideas is <strong>Unit Test First</strong>: the first test in every software project should be a unit test (TDD goes further and holds that the very first piece of code should be a unit test), and unit tests should also carry the largest share of tests in the project.</p><p><a href="https://pic3.zhimg.com/80/v2-db2743517a9f5efe25f291b209eaf40a_720w.webp"></a></p><h4 id="Good-Software-Unit-Testing"><a href="#Good-Software-Unit-Testing" class="headerlink" title="Good Software Unit Testing"></a>Good Software Unit Testing</h4><p>How important is unit testing really? "Wouldn't it be enough to write only end-to-end tests?" 
Questions like these come up often, so let us walk through the benefits of unit testing here.</p><p><strong>Improved debugging efficiency</strong></p><p>Unit tests are fast and stable, and they dramatically narrow the scope of a problem, all of which makes troubleshooting more efficient.</p><ul><li><strong>Tests run faster:</strong> Unit tests have no external dependencies and run quickly, providing a faster feedback loop so problems are found and fixed sooner.</li><li><strong>Tests are more stable:</strong> Again because of the zero dependencies, unit tests are more stable than other types of tests and are not broken by incompatible changes in external modules. They are therefore also the type of test developers can trust the most.</li><li><strong>Problems are easier to locate:</strong> A unit test is scoped to the smallest software unit, so a failure immediately narrows the problem down. By contrast, the higher up the test pyramid, the harder problems are to locate: a complex end-to-end test touches a whole group of modules, which must be checked one by one.</li></ul><p><strong>Improved code quality</strong></p><p>Code is written for others to read; good code should be easy to read, easy to change, and easy to maintain. The process of writing unit tests is really the process of eating your own dogfood: using your own code from the perspective of its users (other developers) helps us improve its quality.</p><ul><li><strong>Good code is easy to test:</strong> Cyclomatic complexity was introduced long ago to quantify the complexity of a module's branching structure by counting its linearly independent execution paths, which can also be read as the minimum number of test cases needed to cover every path. 
A high cyclomatic complexity signals convoluted decision logic; such code tends to be low in quality and hard to test and maintain. Good code is necessarily low in cyclomatic complexity, and therefore also easy to test.</li><li><strong>Easier iterative evolution:</strong> No software stays unchanged; a good software system should be easy to evolve. Modules with good unit test coverage are more cohesive, have clearer boundaries, and are easier to reuse. Refactoring a project with high unit test coverage carries relatively little risk; conversely, <strong>a complex project without unit test coverage is something no one dares to touch.</strong></li><li><strong>Better design:</strong> As noted above, good unit tests improve code quality. A developer who must write unit tests for their own code will pay attention to how the code is layered and will break up overly long methods with excessive cyclomatic complexity. The example below shows the cognitive complexity (a variant of cyclomatic complexity that measures how hard code is to understand) of a piece of code written without unit tests: it exceeds the threshold roughly threefold, and going back now to retrofit unit tests is mind-bending.</li></ul><p><a href="https://pic1.zhimg.com/80/v2-58e5b118d505a32d0bd0369a6451b5b4_720w.webp"></a></p><p><strong>Improved overall R&amp;D efficiency</strong></p><p><strong>Unit tests improve quality while improving speed: they raise the quality and efficiency of R&amp;D and accelerate the delivery of the overall project. 
This sounds counterintuitive at first, because writing unit tests often takes more time than writing the implementation logic</strong>, which is also the most common excuse for not writing them: "the project is urgent, there is no time for unit tests". If a project's life cycle is measured in months and it ships as a quick prototype, unit tests may indeed not pay back their investment. <strong>But Ali has a great deal of to-B business whose services have life cycles measured in years, and the longer the code lives, the better the return on investing in its quality</strong>, specifically in the following areas:</p><ul><li><strong>Reduced debugging time:</strong> The reasons were covered under debugging efficiency above and will not be repeated. Higher unit test coverage saves debugging time, and with sufficient test coverage the project simply has fewer bugs. A real example: one team, owing to historical debt, relied almost entirely on end-to-end tests with no unit test coverage. The consequences were severe: its on-call engineers spent more than 50% of their time fixing all kinds of strange bugs and could not devote their energy to architecture upgrades and other long-term, more important work.</li><li><strong>Greater willingness to change code:</strong> As mentioned earlier, no one dares touch code without test coverage; code with sufficient unit test coverage significantly increases the willingness, and the desire, to refactor it. Another example, from almost ten years ago when I worked at Google headquarters: if you have worked at Google, you will have noticed that your code often receives changes initiated by engineers from unrelated teams. 
In the vast majority of cases these are large-scale refactorings driven by other engineers, such as rewriting your Java code to use the Builder pattern (refactorings made easy by Google's large-scale automation tooling). Imagine: if the code had no test coverage, would anyone dare refactor it that way?</li><li><strong>Tests as documentation:</strong> Documentation makes code easier to use and development more efficient, and good unit tests can serve as documentation for the code: reading the tests quickly shows how the code is meant to be used (cf. TDD). Tests-as-documentation also solve the documentation freshness problem, giving developers high-quality documentation that is updated in lockstep with the code.</li><li><strong>More efficient code review:</strong> Not all problems and design flaws are found through static checks, which is why human code review remains the last line of defense for code quality. At Google, code review is one of the most important gates before code is merged, so review efficiency directly affects overall development efficiency. Good unit test coverage lightens the reviewer's burden, letting them focus on the parts that matter more (such as code design).</li><li><strong>More frequent releases:</strong> The premise of continuous integration and continuous deployment in agile development is sufficient, high-quality automated testing. The efficiency gains that agile development brings to R&amp;D need not be elaborated here. 
But being able to release versions more frequently is already quite valuable in itself.</li></ul><h3 id="Negative-Patterns-and-Common-Misconceptions"><a href="#Negative-Patterns-and-Common-Misconceptions" class="headerlink" title="Negative Patterns and Common Misconceptions"></a>Negative Patterns and Common Misconceptions</h3><p>Above we covered a number of benefits of writing unit tests and the related best practices. Here we also list the common anti-patterns and misconceptions, to help everyone avoid similar mistakes.</p><h4 id="Antipatterns-of-testing-anti-patterns"><a href="#Antipatterns-of-testing-anti-patterns" class="headerlink" title="Antipatterns of testing (anti-patterns)"></a>Testing anti-patterns</h4><p><strong>Anti-pattern #1: the ice cream cone pattern</strong></p><p>End-to-end testing that focuses only on what the user can see, plus heavy reliance on manual QA testing, produces the anti-pattern shown below. Unfortunately, under the influence of the old testing regime it is also the most common pattern. <strong>Under the ice cream cone pattern, test suites typically run slowly and unreliably and are hard to maintain.</strong></p><p><a href="https://pic4.zhimg.com/80/v2-c99708f261a64544845fb749b43ba6d7_720w.webp"></a></p><p>Image source: <em>Software Engineering at Google</em></p><p><strong>Anti-pattern #2: the hourglass pattern</strong></p><p>In the hourglass pattern, the project has plenty of unit tests and end-to-end tests but lacks integration tests. It is not as bad as the ice cream cone, but it still means many failures that surface in end-to-end tests could have been caught faster and more cheaply by medium-sized tests covering the same ground. The hourglass pattern arises when modules are tightly coupled, making it hard to instantiate dependencies in isolation.</p><p>
<a href="https://pic3.zhimg.com/80/v2-401c712cd94473865523a406e40395ba_720w.webp"></a></p><p>Image source: <em>Software Engineering at Google</em></p><h4 id="Common-Error-Areas-in-Testing"><a href="#Common-Error-Areas-in-Testing" class="headerlink" title="Common Error Areas in Testing"></a>Common Error Areas in Testing</h4><p><strong>Common misconception #1: users come first, so testing only needs to cover user needs</strong></p><p>This misconception holds that testing from the user's perspective, covering all the functionality the user wants, is enough; it is exactly what produces the ice cream cone anti-pattern. While the delivered functionality of the software serves external customers, the code that makes up the software is read and maintained by developers. <strong>External customers are users, and internal developers are users too.</strong></p><p><strong>Common misconception #2: skip the tests, save 80% of the test code, and come out ahead</strong></p><p>In the short term, not writing unit tests can save 80% of the test code and at least 50% of the development time. But as soon as the project grows complex, that time is paid back, doubled, sooner or later. If you wait until the debt really comes due, it may be too late.</p><p><strong>Common misconception #3: unit tests are for the weak; I never write bugs</strong></p><p>If so, this article may not be for you. But software development is a team sport: the code you write will end up being maintained and evolved by others, and code without test coverage is code no one dares to touch.</p>]]></content>
    
    
    <summary type="html">From crawling to running - why do we need unit tests?</summary>
    
    
    
    
    <category term="IaaS" scheme="https://www.nablepart.com/tags/IaaS/"/>
    
    <category term="cloud" scheme="https://www.nablepart.com/tags/cloud/"/>
    
    <category term="cloud computing" scheme="https://www.nablepart.com/tags/cloud-computing/"/>
    
  </entry>
  
  <entry>
    <title>Full Scenario Traffic Verification System</title>
    <link href="https://www.nablepart.com/9c6365feabd2/"/>
    <id>https://www.nablepart.com/9c6365feabd2/</id>
    <published>2023-10-29T11:50:26.000Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
    <content type="html"><![CDATA[<p>This article presents a practical solution for verifying the functionality and performance of a refactored system using online traffic. It describes in detail how online traffic is intercepted, recorded, stored, played back, and used to generate load, offering a reference for readers with similar needs.</p><h2 id="1-Business-Background"><a href="#1-Business-Background" class="headerlink" title="1 Business Background"></a>1 Business Background</h2><p>With the launch of the BCinks project, the middle platform needs to consolidate order-taking traffic and switch all the order-taking entry points of ECLP and BP over to the BCinks unified order-taking system. These entry points are invoked in different ways: JOS requests (external merchants), JSF requests (such as TC), and asynchronous MQ messages (such as POP). To ensure that each system cuts over smoothly and to minimize cutover risk, sufficient traffic verification (both functional and performance) must be done before the cutover. 
To this end, a full-scenario traffic verification system was designed to support AB verification (functional verification) and load testing (performance verification) based on online traffic, providing reliable foundational support for the order-taking cutover of each business line.</p><h2 id="2-Explanation-of-Terms"><a href="#2-Explanation-of-Terms" class="headerlink" title="2 Explanation of Terms"></a>2 Explanation of Terms</h2><ul><li>Traffic diversion: introducing online traffic from the systems hosting each order-taking entry point into the traffic verification system.</li><li>Recording: copying online traffic and storing it persistently.</li><li>Playback: replaying recorded traffic against the system to be verified.</li><li>Cutover: switching order-taking traffic from an old order-taking system such as ECLP to the new BCinks unified order-taking system.</li><li>AB verification: online traffic hits the production environment and the AB environment simultaneously, and the results of the two environments are compared and analyzed to verify the correctness of the AB environment.</li></ul><h2 id="3-Design-Ideas"><a href="#3-Design-Ideas" class="headerlink" title="3 Design Ideas"></a>3 Design Ideas</h2><p><strong>How do we divert traffic?</strong><br>Traffic diversion is achieved by introducing a traffic proxy into each business system.</p><p><strong>How do we record?</strong><br>Given the need to support large data volumes and compound queries, ES was chosen as the persistent storage.</p><p><strong>How do we play back?</strong><br>To avoid depending on the Jar packages of the various business systems, we use JSF generalized invocation for traffic playback.</p><p><strong>Is there a similar system available?</strong><br>Moonlight Box (jcase): a traffic recording and playback system developed by Jingdong Retail. 
It supports traffic recording and playback, but it does not meet some of our specific needs, such as recording by custom business rules, cutover control, and so on.</p><h2 id="4-System-Design"><a href="#4-System-Design" class="headerlink" title="4 System Design"></a>4 System Design</h2><h3 id="4-1-Overall-design"><a href="#4-1-Overall-design" class="headerlink" title="4.1 Overall design"></a>4.1 Overall design</h3><p>Traffic proxy: diverts traffic to the verification system through interception, filtering, and reporting.<br>Recording service: receives the online traffic introduced by the traffic proxy and stores it persistently.<br>Playback engine: uses the recorded online traffic to request the target interface under verification.<br>Load engine: replays the recorded online traffic against the target interface from multiple threads to generate load.</p><h3 id="4-2-Detailed-Design"><a href="#4-2-Detailed-Design" class="headerlink" title="4.2 Detailed Design"></a>4.2 Detailed Design</h3><h4 id="4-2-1-Traffic-Proxy"><a href="#4-2-1-Traffic-Proxy" class="headerlink" title="4.2.1 Traffic Proxy"></a>4.2.1 Traffic Proxy</h4><ol><li>Generic traffic proxy</li></ol><p>A traffic proxy is introduced into the business system. It intercepts online traffic (via a JSF filter or AOP) and reports it to the recording service through asynchronous MQ messages for persistent storage.</p><ol start="2"><li>JOS traffic proxy</li></ol><p>External merchants call the JOS platform over HTTP, and the JOS platform internally converts the call into a JSF invocation of the order-taking service. 
To keep the change transparent to external merchants, a JSF service (a virtual service) is published with exactly the same interface as the business system, the only difference being a new alias. Switching to the new alias through JOS platform configuration routes the JOS traffic to the recording proxy, which then reports it to the recording service through asynchronous MQ for persistent storage.</p><h4 id="4-2-2-Traffic-Storage"><a href="#4-2-2-Traffic-Storage" class="headerlink" title="4.2.2 Traffic Storage"></a>4.2.2 Traffic Storage</h4><p>Recorded traffic is stored persistently in ES. Recording tasks are created per [interface:method]. The primary keys of the records under one recording task all share the recording task number as a prefix, followed by an incrementing number; the maximum suffix (cached in Redis) equals the total number of records under that recording task.</p><table><thead><tr><th><strong>Attribute name</strong></th><th><strong>Example value</strong></th><th><strong>Description</strong></th></tr></thead><tbody><tr><td>id</td><td>RT7625109167934456_1</td><td>Primary key identifier</td></tr><tr><td>recordData</td><td>{"args":[{"fakeNo":"fakeNo001"}],"argsType":["cn.jdl.baichuan.router.replay.contract.domain.fake.FlowFakeRequest"],"attachments":{"traceId":"8112206384546625","type":"1"},"clazzName":"cn.jdl.baichuan.router.replay.contract.service.RouterFlowFakeService","methodName":"match","resultObj":true}</td><td>Recorded body</td></tr><tr><td>recordTaskNo</td><td>RT7625109167934456</td><td>Recording task number</td></tr><tr><td>timestamp</td><td>1636719778929</td><td>Timestamp</td></tr></tbody></table><h4 id="4-2-3-Traffic-Playback"><a href="#4-2-3-Traffic-Playback" class="headerlink" title="4.2.3 Traffic Playback"></a>4.2.3 Traffic Playback</h4><p>Playback is supported for a single record, for a batch, or for an entire recording task. 
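As an aside on the storage scheme of 4.2.2, the primary-key generation (recording task number as prefix, followed by an incrementing suffix whose maximum equals the task's total record count) can be sketched as follows. This is a simplified sketch, not the system's actual code: the class name is illustrative, and a thread-safe in-memory counter stands in for the Redis cache.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the primary-key scheme: [recording task number]_[incrementing suffix].
// A map of AtomicLong counters stands in for the Redis cache that tracks the
// maximum suffix (= total record count) per recording task.
class RecordIdGenerator {
    private final Map<String, AtomicLong> counters = new ConcurrentHashMap<>();

    // Next primary key for a record under the given recording task, e.g. "RT..._1".
    String nextId(String recordTaskNo) {
        long suffix = counters
                .computeIfAbsent(recordTaskNo, k -> new AtomicLong())
                .incrementAndGet();
        return recordTaskNo + "_" + suffix;
    }

    // Total number of records recorded so far under the task (the maximum suffix).
    long totalRecords(String recordTaskNo) {
        AtomicLong c = counters.get(recordTaskNo);
        return c == null ? 0L : c.get();
    }
}
```

Using the example values from the table above, the first record of task RT7625109167934456 gets the key RT7625109167934456_1, and the count of stored records can always be read back from the cached maximum suffix.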
Playback calls use JSF generalized invocation, avoiding any dependency on the business systems' Jar packages.</p><p>During playback, a comparison service can also be configured: it compares the input parameters received and the results returned by the old and new interfaces, so the processing results of the two can be compared and analyzed to verify the functional correctness of the new interface.</p><h4 id="4-2-4-Traffic-Pressure-Measurement"><a href="#4-2-4-Traffic-Pressure-Measurement" class="headerlink" title="4.2.4 Traffic Pressure Measurement"></a>4.2.4 Traffic Pressure Measurement</h4><p>To generate realistic load, the target interface must be requested concurrently from multiple machines and threads. Since all machines and threads share the same recorded data as the load data source, the data must be partitioned among the execution threads before load generation actually starts, so that each thread takes only its own data and the threads do not interfere with one another.</p><p>Load generation strategy (master-slave architecture: the master allocates, the slaves execute)</p><p>The load engine adopts a master-slave architecture. Load generators are divided into master and slave nodes: the master node receives load test requests and assigns load test tasks; the slave nodes execute them.</p><p>Data allocation strategy (even split, remainder round-robin, sliding window)</p><ol><li>Compute the window</li></ol><p>The total record count of the recording task is divided evenly among the threads, and the remainder is then handed out to the threads one at a time, round-robin, until it is used up. This fixes the number of records allocated to each thread (its window size).</p><ol start="2"><li>Slide by window</li></ol><p>Lay all the records of the recording task out horizontally from 
left to right; each thread, in order, claims a contiguous block of records equal to its own window size.</p><h2 id="5-Business-Practice"><a href="#5-Business-Practice" class="headerlink" title="5 Business Practice"></a>5 Business Practice</h2><h3 id="5-1-Cut-Volume-Verification"><a href="#5-1-Cut-Volume-Verification" class="headerlink" title="5.1 Cut Volume Verification"></a>5.1 Cutover Verification</h3><p>Take the cutover of the POP order-taking interface for warehousing and distribution as an example: the original ECLP-SO system is to be replaced by the new order center. Before the cutover, the ECLP-SO system continues to serve online order taking, while the traffic verification system records the online traffic and plays it back against the new order center. The order-taking function of the new order center is verified by comparing the results of the old and new systems on the same order-taking request. Only after sufficient functional verification is the order-taking traffic switched over to the new order center, greatly reducing cutover risk.</p><h3 id="5-2-Requirement-Iteration"><a href="#5-2-Requirement-Iteration" class="headerlink" title="5.2 Requirement Iteration"></a>5.2 Requirement Iteration</h3><p>The product verification service is a core interface the product center provides to the outside world, and its logic is complex, so every requirement iteration makes going live a serious challenge. Even after verification in the test and pre-release environments, there is no 100% guarantee that the release will not affect the online business. After all, the request parameters used for verification in the test and pre-release environments are narrow and limited, and cannot reflect the diversity and complexity of online requests. 
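Returning for a moment to the data allocation strategy of section 4.2.4, its two steps (even split plus remainder round-robin, then windows slid left to right) can be sketched as follows. This is a simplified single-process sketch under the assumption that record suffixes start at 1; the class and method names are illustrative, not the system's actual code.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the data allocation strategy: each thread gets an even share of the
// recorded data, the remainder is handed out one record at a time (round-robin),
// and the resulting windows are laid out left to right as contiguous,
// non-overlapping ranges of record suffixes.
class WindowAllocator {

    // Half-open [start, end) range of record suffixes owned by one thread.
    record Window(long start, long end) {
        long size() { return end - start; }
    }

    static List<Window> allocate(long totalRecords, int threads) {
        long base = totalRecords / threads;       // even share per thread
        long remainder = totalRecords % threads;  // first `remainder` threads get one extra
        List<Window> windows = new ArrayList<>();
        long start = 1;                           // record suffixes start at 1
        for (int i = 0; i < threads; i++) {
            long size = base + (i < remainder ? 1 : 0);
            windows.add(new Window(start, start + size));
            start += size;                        // slide the next window to the right
        }
        return windows;
    }
}
```

With 10 records and 3 threads, thread 0 owns suffixes 1..4 and threads 1 and 2 own 5..7 and 8..10, so the windows tile the whole recording task with no overlap and no coordination needed between threads.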
Therefore, the product center integrated with the traffic verification system: before each new requirement iteration goes live, online traffic is recorded first, and that real online traffic is used for full verification in the pre-release environment before the release is carried out. This greatly reduces the chance of harming the online business through inadequate verification, adds a layer of protection for the online business, and improves the stability of the online system.</p>]]></content>
    
    
    <summary type="html">Full Scenario Traffic Verification System</summary>
    
    
    
    
    <category term="IaaS" scheme="https://www.nablepart.com/tags/IaaS/"/>
    
    <category term="cloud" scheme="https://www.nablepart.com/tags/cloud/"/>
    
    <category term="cloud computing" scheme="https://www.nablepart.com/tags/cloud-computing/"/>
    
  </entry>
  
  <entry>
    <title>Hangzhou Asian Games realizes 100% of core systems in the cloud.</title>
    <link href="https://www.nablepart.com/7e30954f0948/"/>
    <id>https://www.nablepart.com/7e30954f0948/</id>
    <published>2023-10-29T11:50:26.000Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
    <content type="html"><![CDATA[<blockquote><p><strong>The Hangzhou Asian Games ran 100% of its core systems in the cloud.</strong></p></blockquote><p>On the evening of October 8, the Hangzhou Asian Games concluded successfully.</p><p>As the first ever "Asian Games on the cloud", the Hangzhou Asian Games made history. It ran 100% of its core systems in the cloud and, backed by cloud computing power, cloud storage, and other cloud technology, supported the construction of digital command platforms at every level and in every venue, achieving comprehensive awareness and efficient command.</p><p>It was also the first Games broadcast via the cloud: according to statistics, 60 HD and ultra-HD signals were transmitted in the cloud during the Hangzhou Asian Games, totaling more than 7,200 hours.</p><p><a href="https://p4.itc.cn/images01/20231012/196ecca958084c0c8b0e06da63b9a207.png"></a></p><h2 id="100-of-the-core-system-on-the-cloud-the-event-results-released-in-5-seconds"><a href="#100-of-the-core-system-on-the-cloud-the-event-results-released-in-5-seconds" class="headerlink" title="100% of the core system on the cloud, the event results released in 5 seconds"></a><strong>100% of the core systems in the cloud, event results released in 5 seconds</strong></h2><p>The Hangzhou Asian Games was the first Asian Games held on the cloud, running 100% of its core systems there; with the help of back-end cloud computing power, cloud storage, and other cloud technology, it supported digital command platforms at all levels and in all venues, achieving comprehensive awareness and efficient command.</p><p>Built on the cloud base, the Hangzhou Asian Games comprehensively adopted cloud-native technology, and the event's core system groups achieved system-level and data-level interconnection. 
This innovative cloud approach avoids the traditional mode in which every system "builds its own chimney" and data struggles to interoperate, and it better supports rich intelligent applications.</p><p>The three core system groups securely converge event information such as registration management, competition entries, and event results on the cloud computing base, and unify output through application programming interfaces, file transfer, and other channels, enabling accurate, instant, one-click output of all kinds of data. Once the referees confirm the results, the event information can be published within 5 seconds, a new benchmark for fast, practical results publication among comprehensive international games.</p><p><a href="https://p7.itc.cn/images01/20231012/ce5d7fad3e4e409b95d30b4e7c332437.jpeg"></a></p><p>Scoreboard at a venue</p><h2 id="7-200-hours-of-broadcasting-on-the-cloud-watching-the-games-from-multiple-angles-and-in-higher-definition"><a href="#7-200-hours-of-broadcasting-on-the-cloud-watching-the-games-from-multiple-angles-and-in-higher-definition" class="headerlink" title="7,200+ hours of broadcasting on the cloud, watching the games from multiple angles and in higher definition"></a><strong>7,200+ hours of cloud broadcasting: watching the games from more angles, in higher definition</strong></h2><p>The Hangzhou Asian Games was the first Asian Games broadcast via the cloud, with 60 HD and ultra-HD signals transmitted in the cloud, totaling over 7,200 hours. Beyond live broadcasts of the competitions, the Games also provided short videos, highlights, event news, and other video content on the cloud.</p><p>Compared with traditional satellite broadcasting, cloud broadcasting transcends the limits of bandwidth and on-site equipment, offering richer picture signals and more editing options. Dr. 
Chuchat, Senior Advisor to the Sports Commissioner of Thailand, said that through Alibaba Cloud's cloud broadcasting service, local media in Thailand could pull signal streams at any time and from anywhere, allowing Thai viewers to watch the exciting Asian Games in real time.</p><p>Through the cloud broadcasting center, and relying on the cross-region capabilities of the cloud network, the Asian Games broadcast signal was transmitted quickly and stably from the venues to the Alibaba Cloud video broadcasting centers in Shanghai and Beijing, and then streamed in real time to audiences across Asia and around the world via Alibaba Cloud nodes in Hong Kong (China), Singapore, Mumbai (India), and elsewhere.</p><h2 id="Creating-the-first-integrated-intelligent-organizing-platform-for-large-scale-sports-events"><a href="#Creating-the-first-integrated-intelligent-organizing-platform-for-large-scale-sports-events" class="headerlink" title="Creating the first integrated intelligent organizing platform for large-scale sports events"></a><strong>Creating the first integrated intelligent organizing platform for large-scale sports events</strong></h2><p>The intelligent base supports a variety of intelligent services for the Asian Games. Among them, "Asian Games DingTalk", jointly created by the Hangzhou Asian Games Organizing Committee and DingTalk, is the world's first integrated intelligent event-organizing platform for a large-scale sports event, serving 100,000 event staff.</p><p>
<a href="https://p2.itc.cn/images01/20231012/b61f826c6ad04b14b2e80244e7c24825.png"></a></p><p>Asian Games DingTalk integrates 293 applications across business areas such as administrative approval, meteorological services, conference services, and medical services, and connects to a range of core Asian Games system applications on Alibaba Cloud, including the competition video system, the IT event tracking and management system, and the volunteer management system.</p><p>During the Asian Games, nearly 100,000 staff members communicated and collaborated online, without hierarchy, through Asian Games DingTalk. In addition, Asian Games DingTalk supports real-time translation across 13 languages, including Chinese, English, Japanese, and Thai, easing communication among staff from different countries.</p>]]></content>
    
    
    <summary type="html">Hangzhou Asian Games realizes 100% of core systems in the cloud.</summary>
    
    
    
    
    <category term="IaaS" scheme="https://www.nablepart.com/tags/IaaS/"/>
    
    <category term="cloud" scheme="https://www.nablepart.com/tags/cloud/"/>
    
    <category term="cloud computing" scheme="https://www.nablepart.com/tags/cloud-computing/"/>
    
  </entry>
  
  <entry>
    <title>The Elephant&#39;s Turn - How Platform Architecture Embraces Business Innovation</title>
    <link href="https://www.nablepart.com/acaad73d8a80/"/>
    <id>https://www.nablepart.com/acaad73d8a80/</id>
    <published>2023-10-29T11:50:26.000Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
    <content type="html"><![CDATA[<h3 id="Introduction"><a href="#Introduction" class="headerlink" title="Introduction"></a>Introduction</h3><p><strong>This article shares architecture practice and the thinking behind it.</strong> If you are <strong>the architect</strong> of a <strong>hugely complex platform</strong> (e.g. e-commerce, payments, logistics), facing technical liabilities of all kinds (architectural complexity, team collaboration complexity), <strong>while the business is transitioning from platform services to scenario-based innovation</strong>, then this article may be fruitful for you:</p><ol><li><p>How do you reinvent a ten-year-old funds platform architecture?</p></li><li><p>How do you build a shared platform architecture that drives a shift in the R&amp;D collaboration model and improves global R&amp;D efficiency?</p></li><li><p>How do you build a flexible innovation growth architecture on top of the platform to reduce the opportunity cost of innovation?</p></li></ol><h3 id="I-The-Origin-of-Funding-Innovation-Platforms"><a href="#I-The-Origin-of-Funding-Innovation-Platforms" class="headerlink" title="I. The Origin of Funding Innovation Platforms"></a>I. The Origin of Funding Innovation Platforms</h3><h4 id="1-What-is-a-funding-business-platform"><a href="#1-What-is-a-funding-business-platform" class="headerlink" title="1. What is a funding business platform?"></a>1. What is a funding business platform?</h4><p>It is a <strong>funds transaction processing platform</strong> that supports all kinds of business scenarios, such as a transfer, or the life cycle of a red packet (send, receive, refund). These seemingly simple actions are in fact very complex behind the scenes. First is business complexity: apart from acquiring transactions, everything else can be categorized as a funds transaction, so the platform carries a very complex business model. 
Second, from the point of view of the transaction itself, is responsible for the transaction elements of the aggregation, capital flow processing, and finally complete the fulfillment, is the payment system in the top and bottom of the core system.</p><p>! <a href="https://pic3.zhimg.com/80/v2-7da67587bc5b0a55c36613e2e808910e_720w.webp"></a></p><h4 id="2-Platform-to-Scenario-based-Innovation-Transformation"><a href="#2-Platform-to-Scenario-based-Innovation-Transformation" class="headerlink" title="2. Platform to Scenario-based Innovation Transformation"></a><strong>2. Platform to Scenario-based Innovation Transformation</strong></h4><p>Funding business platform has gone through several stages of evolution: from the beginning of the monolithic application, to the service-oriented and platformized in the process of rapid development of the industry . In the recent years, it is the stage of rapid transformation from tool-based products to scenario-based services, how to support innovation and trial and error in a “stable and fast” way is a great challenge to the architecture design.</p><p>The following is a summary of the architecture design. <a href="https://pic1.zhimg.com/80/v2-642e709ab993e464b6928d59f2b6cce0_720w.webp"></a></p><h3 id="Problems-and-contradictions"><a href="#Problems-and-contradictions" class="headerlink" title="Problems and contradictions"></a>Problems and contradictions</h3><h4 id="1-Apparent-Contradictions"><a href="#1-Apparent-Contradictions" class="headerlink" title="**1. Apparent Contradictions"></a>**1. 
Apparent Contradictions</h4><p><strong>The demand for rapid trial and error of innovative business, and the contradiction between the R&amp;D delivery cycle The threshold of funding innovation is high, and the R&amp;D delivery cycle generally starts at 1 month, and relying on blindly adding people has failed to solve the problem of overall delivery efficiency</strong>, the application architecture of the funding platform has historically followed the products and lines of business, forming a three-legged tripod, and a hundred contending situations (personal, commercial, and group), and with the scaled-up innovation, the business boundaries were gradually broken, and capabilities were duplicated and of uneven quality. Although the team once massively expanded, due to platform liabilities as well as complexity, it was unable to stimulate the motivation of innovation at the business level, resulting in a light head and a heavy foot, unable to take a step forward.</p><h4 id="2-Definition-of-the-Problem"><a href="#2-Definition-of-the-Problem" class="headerlink" title="**2. Definition of the Problem **"></a>**2. Definition of the Problem **</h4><h4 id="2-1-Chimney-style-funding-platform-architecture"><a href="#2-1-Chimney-style-funding-platform-architecture" class="headerlink" title="2.1. Chimney-style funding platform architecture"></a>2.1. Chimney-style funding platform architecture</h4><p><strong>High marginal costs and duplication of capabilities</strong> The chimney architecture of funding platforms has managed to be flexible and locally optimal in historical times. 
However, as the business developed and integrated, the cost of duplicated construction came to far outweigh the advantages that flexibility had brought. For example, TOB enterprise payment-on-behalf and the TOB small wallet are both innovations built on shared accounts at the bottom, yet each built its own account asset model and money-in/money-out flows — a huge R&amp;D investment that clearly defied the business side's intuition.</p><p><img src="https://pic3.zhimg.com/80/v2-51d24623f0266fb9dc449f7c7e6c0cce_720w.webp"></p><p><strong>2.2. Internet Growth Oriented Architecture</strong></p><p>Alipay is good at building payment platforms, but for Internet-oriented scenario building and business-scale growth it lacks the relevant architectural experience, and its R&amp;D efficiency drags behind the rhythm of innovation and trial-and-error.</p><p><img src="https://pic3.zhimg.com/80/v2-e534d71d667e4626897d6ca3eccce9b6_720w.webp"></p><h3 id="Third-the-overall-design"><a href="#Third-the-overall-design" class="headerlink" title="Third, the overall design"></a>III. The Overall Design</h3><h4 id="1-Architecture-Vision"><a href="#1-Architecture-Vision" class="headerlink" title="1. Architecture Vision"></a><strong>1. Architecture Vision</strong></h4><p>Over the past year we launched the Funding Innovation Platform project cluster and defined the architectural goal for the broader funding domain: a unified platform for funding business innovation, present and future. At the global design level, our vision is a next-generation, innovation-oriented funding platform architecture, in which the funding platform becomes more stable and business innovation can focus on delivering innovation logic. The figure below shows the direction of the first version of the design.</p><p><img src="https://pic3.zhimg.com/80/v2-c3aaa24f38c3193a7abbfd607a90c0e2_720w.webp"></p><h3 id="Fourth-the-key-architectural-design-a-remodeling-the-funding-platform-architecture"><a href="#Fourth-the-key-architectural-design-a-remodeling-the-funding-platform-architecture" class="headerlink" title="Fourth, the key architectural design (a) remodeling the funding platform architecture"></a>IV. Key Architecture Design (I): Remodeling the Funding Platform Architecture</h3><p><strong>The funding platform (funds-type transaction processing) is the root node of all funds-type business innovation</strong>, supporting the growth of every business capillary above it. In past practice, however, its "three highs" (high technical entry threshold, high implementation complexity, high stability requirements) made delivery cycles too long; for most front-line businesses, "payment team" was the first impression, which seriously restricted the expansion of innovative business. Based on governance of the existing architectural liabilities (multiple chimneys), and judging the future demand of scenario-based innovation for fast delivery of funding capabilities, we restructured the overall architecture of the funding domain. The work unfolds in three phases: <strong>domain model refactoring, platformization design, and capability productization design.</strong></p><h4 id="1-Domain-Model-Refactoring"><a href="#1-Domain-Model-Refactoring" class="headerlink" title="1. Domain Model Refactoring"></a>1. Domain Model Refactoring</h4><p><strong>1.1 Summarize domain boundaries through use-case analysis</strong></p><p>A domain has to define clear boundaries, i.e., find a suitable demarcation line (what to do and what not to do) to decouple the complex relationships between domains. 
The key is to identify "conceptual classes" from business use cases — nouns (domains), adjectives (capabilities), verbs (relationships) — and then generalize and cluster them into separate domains for independence and reuse. <strong>Define the domain boundaries between services.</strong> Take the red packet business as an example (see below): red packets pass between friends through a variety of gameplay; funds, driven by the red packet, transfer between payer and payee. After formalizing the requirements we can easily identify two independent domains and capability sets, and, for reusability, the funds-processing capability is abstracted and pushed down so it can be reused in other scenarios.</p><p><img src="https://pic4.zhimg.com/80/v2-3a4546b1b9956d4e635b50ec06d306f3_720w.webp"></p><p><strong>Define the domain boundaries within the service.</strong> The essence of the funding domain is to turn the fund-transfer demand of an upstream, weakly transactional business scenario into a funding order, and to use that order as the carrier for business-behavior payment and asset-settlement services, with people, accounts, business entities, and business assets as participants. <strong>Funding business domain modeling, L0:</strong> from a complete business perspective, the <strong>coordination capabilities</strong> of peripheral dependencies and services (cashier, payment, charging, rights restriction, security, etc.) are abstracted into reusable <strong>domain components</strong>; the contract parameters understood by the order domain, the instruction parameters from upstream, and the invocation parameters to downstream systems are integrated at the component level to support the various business activities.</p><p><img src="https://pic1.zhimg.com/80/v2-bcd4e0f8966df5123743d7239d2d3cd8_720w.webp"></p><p><strong>Funding core domain modeling, L1:</strong> split the previous funding order model into a <strong>funding order model and a funding flow model</strong>, making explicit the difference between the order flow tied to business actions and the funding flow tied to fund settlement; at the same time, abstract <strong>participant and business-asset models</strong>. The result offers participant extension, business-asset extension, and flexible orchestration of both the funding order flow and the funding flow, addressing the complexity of funds processing and enhancing future scalability.</p><p><strong>1.2 Reasoning about unified transaction models through deduction</strong></p><p>Can transaction models be reused across industries? Beyond generalizing from the past, a process of deduction is also needed: look at the business lifecycle and business elements from a more macro perspective and put forward hypotheses. For example, we assume that the various kinds of order transactions are essentially carriers combining participant, asset, and payment, and that this carrier can be composed from different transaction models. Returning to the red packet use case: behind the red packet is a multi-stage order-flow transaction model, and that model can be reused in other business scenarios, such as a DingTalk transfer (where the payee must confirm before receiving payment). Serving a large number of B and C scenarios, we have accumulated transaction patterns such as B2B (<strong>single, pooled</strong>), B2C (single, batch, multi-stage), C2C (single, batch, multi-stage), and B2B2B; the payment system arguably hosts some of the most complex transaction patterns. <strong>A core pain point of the old code: unreasonable design of the order domain model.</strong> The old system's domain model for these transaction scenarios was rigid: batch, two-stage, and single orders each had chimney-style models and code, so the system could not digest the new transaction models demanded by business innovation. The order model also needs to take on more downstream coordination, and the chimney-style domain model made that "coordination" work more complex and impossible to reuse. <strong>Remodeling the order domain model: abstract the commonality and use the polymorphism of the Java language.</strong> We abstracted the common part of the domain model into a base order model, on which transaction patterns such as parent-child orders and batch orders are derivatives or implementations. Downstream coordination no longer needs to understand the concrete transaction model, so the impact of changing or adding a transaction model on the system is effectively contained.</p><p><img src="https://pic3.zhimg.com/80/v2-bb79ade486a6b28131d64b48a48cb082_720w.webp"></p><h4 id="2-Platformized-Design-Shared-Business-Platform"><a href="#2-Platformized-Design-Shared-Business-Platform" class="headerlink" title="2. Platformized Design - Shared Business Platform"></a><strong>2. Platformized Design - Shared Business Platform</strong></h4><p>The purpose of platformized design is to maximize asset reuse while allowing flexible extension through configuration. 
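</p><p>As a minimal Java sketch of the polymorphic order model just described (class and method names here are illustrative, not the platform's actual code): a base order carries the commonality, patterns such as batch orders derive from it, and downstream coordination depends only on the base type.</p>

```java
// Illustrative sketch: transaction patterns as derivatives of a base order model.
import java.util.List;

abstract class BaseOrder {
    final String orderId;
    final long amountCents;
    BaseOrder(String orderId, long amountCents) {
        this.orderId = orderId;
        this.amountCents = amountCents;
    }
    // Amount the coordinator must settle; subclasses may override.
    long settlementAmount() { return amountCents; }
}

// Single-stage order: the base behavior is enough.
class SingleOrder extends BaseOrder {
    SingleOrder(String id, long amount) { super(id, amount); }
}

// Batch order: settlement is the sum of its sub-orders.
class BatchOrder extends BaseOrder {
    final List<BaseOrder> subOrders;
    BatchOrder(String id, List<BaseOrder> subOrders) {
        super(id, 0);
        this.subOrders = subOrders;
    }
    @Override long settlementAmount() {
        return subOrders.stream().mapToLong(BaseOrder::settlementAmount).sum();
    }
}

// Downstream coordination sees only BaseOrder, so adding a new
// transaction pattern (multi-stage, parent-child, ...) never changes it.
class FundCoordinator {
    long settle(BaseOrder order) { return order.settlementAmount(); }
}
```

<p>Adding a multi-stage or parent-child pattern then means adding one subclass, without touching the coordination code.</p><p>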
Generally, teams working on a platformized design can deliver complete requirements against interface contracts — for e-commerce, the commodity and fulfillment platforms; for payment, the acquiring, payment, and billing platforms. So what is a "shared business platform"? The funding business is special: there are multiple business units and multiple technical teams, sometimes crossing over (business B's technical team taking on business A's requirements), so a platform that closes the delivery loop by itself easily becomes a delivery hotspot. The business technology teams are closer to the business and can prioritize better. Based on this, the concept of a "shared business platform" was proposed at the start of the funding business platform design. It is like a shopping-mall complex: the platform team plans and builds the mall infrastructure (water, electricity, gas, zoning), while each business team directly runs its own store — what business it does, how it is decorated, how it is marketed — as long as it complies with the mall's statutes.</p><p><strong>2.1 Key Designs to Achieve Platformization</strong></p><p><strong>Componentized design</strong></p><p><img src="https://pic1.zhimg.com/80/v2-104f1a070e9764d27e7f7d6f9af51314_720w.webp"></p><p>Ford's automobile assembly line inspired us on how to move from part-level reuse up to module- and fragment-level reuse. Some colleagues mentioned that whole-vehicle architectures have similar reuse designs, so with a learning attitude we studied the hundred-year evolution of vehicle architecture — and unexpectedly found the key solution ideas, which further confirmed our architectural direction.</p><ol><li><p><strong>Abstract modules</strong> — compare the differences to find the commonalities, and abstract modules as the highest common denominator of reusability across all models.</p></li><li><p><strong>Assemble modules</strong> for business customization — differences within the same module look like differences in the functions of a few parts, but are essentially different part designs addressing different needs: safety, size, energy, emissions, and so on. For example, every module has to understand the "safety level" constraint, and each module of the vehicle behaves differently under different safety levels. To summarize platform-based vehicle manufacturing: to meet different personalized needs and improve assembly-line efficiency, the vehicle architecture is decomposed by certain principles into a number of independent, interconnected modules, modular parts are generalized as far as possible, and models of different positioning and class are produced by adjusting and combining different modules. 
<strong>How to realize the adaptation and assembly of components.</strong> We were surprised to find that a seemingly unrelated industry takes such a similar approach to its architecture problem. From the description above we can summarize the solution ideas:</p></li></ol><ul><li>Parts are abstracted into modules — a higher-dimensional level of abstraction.</li><li>The constraints each module must understand are abstracted into unified interfaces. Modules do not depend on each other; they depend on the constraint interfaces — that is, dependency inversion.</li></ul><p>Accordingly, we raised the reuse granularity from atomic Java methods up to module-level granularity: components, component patterns, process phases, and processes.</p><ul><li>The reuse granularity must not be too large: reusing whole business processes cannot digest the abstractions of, and differences between, more transaction scenarios, and business processes would gradually multiply.</li><li>Nor should it be too small: a granularity like individual Java methods is both unmanageable and barely reusable.</li></ul><p><img src="https://pic1.zhimg.com/80/v2-ac8063017878e08aa2bf461df7367350_720w.webp"></p><p><strong>Extensible design</strong></p><p>Components are abstracted and reused, but we also need to meet the customization demands of different scenarios. The same A-&gt;B fund transfer may follow different payment rules in different scenarios: refund cycle, amount limits, payment channel, security and risk control, and so on.</p><p><img src="https://pic2.zhimg.com/80/v2-082018b7912b627f416247ac5a8093f5_720w.webp"></p><p>We then looked at how car manufacturers handle customer customization. Although many manufacturers share production platforms, market positioning splits the output into several sedan models, several SUVs, several MPVs: different products with different "product-specific constraints" built on the same platform architecture. Moreover, when you walk into a car dealership, the salesperson often offers "packages" for a given model — the Night Edition, the Sport Package, and so on. Viewed through the platform-architecture lens, these "packages" are simply "specific market-demand constraints" on the model: under the "sport package" constraint, the wheels get bigger, the seats get a sports lumbar-support kit, the paint becomes a cool blue, and so on. And if you ask whether heated seats can be added on top of the sport package, most manufacturers can probably support such further personalization.</p><p><img src="https://pic4.zhimg.com/80/v2-b30621f000c6909b396ef19cf643c913_720w.webp"></p><p>(Business architecture layering)</p><p>By analogy, on a platformized architecture we can likewise offer customers <strong>standard products, add-on packages</strong>, and personalized customization. We can build a product layer that, based on the extensible and configurable capabilities of the platform layer, selects specific product features and packages them as standard products. 
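</p><p>The layering just described — platform components constrained by unified interfaces (dependency inversion), assembled into products with optional capabilities — can be sketched in Java as follows. All class names here are hypothetical illustrations, not real platform code.</p>

```java
// Illustrative sketch: a "product" assembles reusable platform components;
// components depend only on a shared constraint interface, never on each other.
import java.util.ArrayList;
import java.util.List;

interface Constraints {            // constraints every component understands
    String riskLevel();            // e.g. "standard" or "strict"
}

interface Component {
    String apply(Constraints c);   // each component adapts to the constraints
}

class PaymentComponent implements Component {
    public String apply(Constraints c) { return "pay[" + c.riskLevel() + "]"; }
}

class RefundComponent implements Component {
    public String apply(Constraints c) { return "refund[" + c.riskLevel() + "]"; }
}

// A product = a selection of components + the constraints of its positioning.
class Product {
    private final List<Component> assembly = new ArrayList<>();
    private final Constraints constraints;
    Product(Constraints constraints) { this.constraints = constraints; }
    Product with(Component component) { assembly.add(component); return this; }
    List<String> run() {
        List<String> out = new ArrayList<>();
        for (Component comp : assembly) out.add(comp.apply(constraints));
        return out;
    }
}
```

<p>A "standard product" is then just a fixed selection of components, and an "add-on package" adds components or tightens the constraints, without changing any component's code.</p><p>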
For further customization, we provide optional "business capability" packages and, based on the <strong>business identity</strong> of each business application, personalized business configuration and business-code extension. Many platforms in the industry — for instance the business-capability design of Ali's e-commerce platform — likewise form a layered business-reuse architecture through layered delivery and layered governance, which coincides with our design ideas.</p><p><strong>2.2 Shared development and shared runtime</strong></p><p>Scenario extensions and platform services are isolated to realize the shared development model.</p><p><img src="https://pic2.zhimg.com/80/v2-6b0959eb21fe27dc64317ef4e8e34d75_720w.webp"></p><p>After architectural deliberation, we discarded the old two-application split (a product layer, prod, and a core layer, core) and merged them into one large platform application, evolving toward a business-platform application architecture. Decision extension moved from the front product layer into scenario-based decision extension inside the platform; in the code, scenario extensions were completely separated from the platform in the form of extension packages; and we proposed a "business containerized delivery" mode, using a serverless application architecture to decouple the development and deployment of scenario extension packages. Decoupling business and platform through serverless Ark module packages suits a shared R&amp;D team that cannot split the application and must deliver business on one platform: it achieves complete isolation of platform and business in both code and runtime, keeps platform-side capabilities pure and unpolluted, and maximizes the reuse efficiency of platform capabilities.</p><h4 id="3-Platform-capability-productization-design"><a href="#3-Platform-capability-productization-design" class="headerlink" title="3. Platform capability productization design"></a><strong>3. Platform capability productization design</strong></h4><p><strong>The software world and the real world should be in a continuous process of fitting.</strong> Through model refactoring and platformized design, assembling funding capabilities became much more flexible, but we realized that technical flexibility alone is not enough: the internal capabilities of the technology platform must be fully describable and externalized as product capabilities (product processes, product function parameters). Only then can the platform stay fresh and keep iterating.</p><p><strong>3.1 Product and Technology Capability Fitting</strong></p><p>We all claim to be doing business, yet our code contains none of the core business concepts, which makes communication and collaboration between business and technology extremely inefficient: work originally scoped at one week takes a month to land, while the real code change may be a few lines. This kind of problem is very common in the payment business. 
As shown in the figure below, the requirement models we used to receive were generic pipelines; capability reuse depended entirely on what the architect happened to abstract while taking in the requirements, and what got abstracted often reached the PD only by word of mouth. We often talk about domain-driven design — emphasizing business-domain architecture, with the domain abstractor closely connected to the business — but our actual situation was not like that. The difficulties in feature reuse and product inheritance, and the weak product-operation capability, stem from "product capability" being inconsistent with "technical abstraction".</p><p><img src="https://pic1.zhimg.com/80/v2-018b3f05644419a9851f9ec1c53e85ec_720w.webp"></p><p>(The description of product and technical capabilities relies on experience)</p><p><strong>3.2 Explicit Expression of Product Capability</strong></p><p>So, is there a way to bind product capability and technical abstraction more tightly — even to make product capability a kind of no-code "product function" that the product manager can write, with some correlation to the Java code the engineers write? We think so; we only need to:</p><p><strong>1) Deliver the technical abstractions as standardized components and component extension points.</strong></p><p><strong>2) Give PDs a product-function workbench so product functionality can be defined without code.</strong> PDs create a new funding product on the workbench and abstract its product functionality. 
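</p><p>A tiny Java sketch of the binding idea behind points 1) and 2) — all names here are hypothetical — in which R&amp;D registers extension points and a PD-authored form of product functions is resolved against them, so product capability and technical abstraction cannot drift apart silently:</p>

```java
// Illustrative sketch: bind a declarative product-function form (authored by
// a PD as key -> parameter) to technical extension points registered by R&D.
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

class ProductFunctionBinder {
    // extension points registered by R&D, keyed by capability name
    private final Map<String, Function<String, String>> extensionPoints = new HashMap<>();

    void registerExtensionPoint(String capability, Function<String, String> impl) {
        extensionPoints.put(capability, impl);
    }

    // Resolve a PD-authored form; an unknown capability fails loudly instead
    // of silently diverging from the technical abstraction.
    Map<String, String> apply(Map<String, String> productForm) {
        Map<String, String> result = new HashMap<>();
        productForm.forEach((capability, value) -> {
            Function<String, String> ext = extensionPoints.get(capability);
            if (ext == null) {
                throw new IllegalArgumentException("no extension point: " + capability);
            }
            result.put(capability, ext.apply(value));
        });
        return result;
    }
}
```

<p>The form the PD fills in is exactly the set of capability names that exist in code, which is what keeps the two descriptions consistent.</p><p>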
We make this abstraction visual: PDs design a form in which the functionality is expressed, and the form can later serve as an operable position for the product — we call it the operational view.</p><p><strong>3) Establish the connection between the product functions defined by PDs and the abstract technical code</strong> (merge, association, cascade, and/or, conditional, and other relationships).</p><p><img src="https://pic4.zhimg.com/80/v2-8bd3de271d711d08b3f13c110b69b1cb_720w.webp"></p><p>(Technical specification converted to product specification)</p><h3 id="V-Key-Architecture-Design-II-How-to-Make-Innovation-Run-Faster"><a href="#V-Key-Architecture-Design-II-How-to-Make-Innovation-Run-Faster" class="headerlink" title="V. Key Architecture Design (II) - How to Make Innovation Run Faster"></a>V. Key Architecture Design (II): How to Make Innovation Run Faster</h3><p>The previous section focused on platformizing funding capabilities and putting them on an assembly line. This section focuses on improving the speed of end-to-end business R&amp;D delivery, to help funding-scenario innovation grow at scale. If funds-transaction capability is the core engine, then funding-scenario innovation connects the components into a complete vehicle and sells it to the market quickly to achieve growth. The keys here are "lightweight" and "agile". Back in payment, from our experience landing multiple products, the lifecycle of an innovative business roughly divides into three stages — <strong>scene construction, operational growth, and insight iteration</strong> — as shown below.</p><p><img src="https://pic3.zhimg.com/80/v2-d19aeda3937747407b1635a64ba62246_720w.webp"></p><p>It is not hard to see that scene construction generally needs no more than three kinds of capability:</p><p><strong>1) CRUD for scenario-oriented models (build new):</strong> for example, account relationships and aggregation, or billing feeds and comments — these have domain characteristics, and no reusable platform system exists anywhere in the company, so we must deliver the CRUD capabilities of a new domain service.</p><p><strong>2) Orchestration of middle-platform domain services (reuse):</strong> capabilities such as accounts (shared accounts), funds-type transactions (recharge, transfer, withdrawal), social, asset core, cashier payment, and security obviously need no rebuilding; we only need to orchestrate Ant's huge middle-platform system. In the link shown below, our fund application relies on capabilities from many other domains; we orchestrate them, expose the mobilegw interface, and agree on interface protocol standards with the front end.</p><p><strong>3) Finally, front-end components are assembled into pages and processes.</strong> In short: the domain essence of funding-scenario innovation = CRUD construction of the scenario domain (new) + orchestration and aggregation of existing domain capabilities.</p><h4 id="1-Lightweight-build-and-growth-system"><a href="#1-Lightweight-build-and-growth-system" class="headerlink" title="1. Lightweight build and growth system"></a><strong>1. Lightweight build and growth system</strong></h4><p>Part of scene construction consists of lightweight pages and processes. A few common cases from our CY21 innovation businesses: the account-opening acquisition scene of the Ant "together flowers" small-wallet product, the living-expenses transfer scene, and the new C/B balance mini-program positions.</p><p><img src="https://pic2.zhimg.com/80/v2-35570d233fc2b13025574fbb8d18465d_720w.webp"></p><p>Product development for these fixed and marketing positions basically divides into a front-end mini-program interaction page (front end) + content services (server side) + simple functional interaction services (server side); product and operations colleagues may additionally raise refined-operation requirements for some positions, plus post-launch data-analysis requirements. <strong>One-stop building and growth platform.</strong> In CY21 we started building a low-code building platform: a one-stop operation platform with capabilities such as refined operation, fast building, data intelligence, and "multi-opening", to improve both R&amp;D and operations efficiency. <strong>1) Rapid construction:</strong> a one-stop, low-code building solution for fields, positions, and product pages that improves the efficiency of front-end and back-end engineers — the patterns in this space are relatively fixed, so the traditional hand-written service-aggregation mode is upgraded to front-end/back-end integrated templates. <strong>2) Refined operation:</strong> businesses can self-serve one-stop, per-user personalized refined operation at the page and module level, greatly improving operations efficiency. 
Templating must be able to evolve toward no-code; no-code inevitably changes the production relationship, turning this work from something engineers do into business self-service — a win for both sides. <strong>3) Data intelligence:</strong> complete full-link data standards and standardized placement-effect data products; through unified event tracking and an offline data-insight system, operational strategies and business indicators are injected into the operations process, meeting operators' need for data insight and improving the conversion of business growth. After unified templating, the data is easily unified as well.</p>]]></content>
    
    
    <summary type="html">The Elephant&#39;s Turn - How Platform Architecture Embraces Business Innovation</summary>
    
    
    
    
    <category term="IaaS" scheme="https://www.nablepart.com/tags/IaaS/"/>
    
    <category term="cloud" scheme="https://www.nablepart.com/tags/cloud/"/>
    
    <category term="cloud computing" scheme="https://www.nablepart.com/tags/cloud-computing/"/>
    
  </entry>
  
  <entry>
    <title>How to Diversify Microservices Governance in JavaAgent with Dynamic Configuration Center</title>
    <link href="https://www.nablepart.com/c56a72554977/"/>
    <id>https://www.nablepart.com/c56a72554977/</id>
    <published>2023-10-29T11:50:26.000Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
    <content type="html"><![CDATA[<h2 id="I-Preface"><a href="#I-Preface" class="headerlink" title="I. Preface"></a>I. Preface</h2><p>With the wide application of JavaAgent technology in microservice governance, we can monitor, manage, and adjust microservices at runtime to meet different business requirements and operating environments. As microservice architectures grow more complex, however, managing and configuring their governance becomes harder and harder, so using a dynamic configuration center to implement diversified microservice governance in a JavaAgent becomes crucial.</p><p>Sermant is a proxyless service mesh based on Java bytecode enhancement that supports diversified governance of microservices through dynamic configuration. The following is the microservice architecture of Sermant:</p><p>Sermant does not provide a dynamic configuration center of its own; instead, it implements its dynamic configuration capability on top of different configuration centers. With this capability, Sermant can not only listen for configuration changes in mainstream configuration centers, but also listen at different scopes: for example, it can watch service-level configuration changes as well as application-wide global configuration changes. This feature helps developers and operations staff manage microservice governance capabilities.</p><h2 id="Sermant’s-Dynamic-Configuration-Model"><a href="#Sermant’s-Dynamic-Configuration-Model" class="headerlink" title="Sermant’s Dynamic Configuration Model"></a>Sermant’s Dynamic Configuration Model</h2><p>The Sermant dynamic configuration model is a configuration management solution based on a hierarchical model, and its core components are the Group and the Key. 
Sermant isolates configuration items through different Groups (grouping information), making configuration management more flexible and scalable, while the Key identifies a specific configuration item, enabling precise control and efficient maintenance of configuration items.</p><p>In the Sermant dynamic configuration model, the number of Groups should be kept small. Groups are implemented on top of the configuration center’s own data model; for example, Nacos namespaces, which are used for tenant isolation, are not intended to be numerous, so the number of Groups in the Sermant dynamic configuration model should likewise be limited.</p><p>In contrast to Groups, the model allows many Keys to be created, but a single instance should not subscribe to too many of them. If it does, the subscription and maintenance of configuration items may degrade service performance and even lead to delayed configuration updates or configuration conflicts. Controlling the number of Keys subscribed by a single instance is therefore one of the key factors in keeping configuration management efficient and available.</p><p>Through the combination of Groups and Keys, the Sermant dynamic configuration model covers complex configuration scenarios comprehensively and efficiently. 
At the same time, by keeping the number of Groups and Keys under control, it simplifies the subscription and update of configuration items and improves system availability and maintainability.</p><p>Sermant’s dynamic configuration model is shown below:</p><ul><li><strong>Configuration model implementation based on Zookeeper</strong></li></ul><p>Zookeeper uses a tree-like data model that stores data on individual data nodes called Znodes. A Znode is the smallest data unit in Zookeeper, and a Znode can have multiple child nodes. A unique Znode is identified by a path such as &#x2F;zookeeper&#x2F;key1, as shown in the following figure:</p><p>Sermant’s dynamic configuration model is implemented on the Zookeeper configuration center as follows:</p><p>Group (grouping information): the parent node path of a Znode serves as the Group information.</p><p>Key (configuration item name): the node name of a Znode serves as the Key of the configuration item, and the node data holds the specific configuration content.</p><p>Znodes are isolated by their paths, and nodes with the same name can exist under different paths. This guarantees that Sermant isolates configuration items through different Groups (grouping information), while the same Key (configuration item name) can exist under different Groups.</p><ul><li><strong>Configuration model implementation based on Nacos</strong></li></ul><p>The Nacos data model is hierarchical: for Nacos configuration management, a configuration set is located by Namespace, Group, and Data ID, as shown in the following figure:</p><p>A Namespace is mainly used to isolate configuration across different environments, a Group (configuration grouping) is mainly used for different projects or applications, and a configuration set can contain various configuration information for a system; the ID of each configuration set is its Data ID. 
Each configuration item in a configuration set represents a specific configurable parameter and its value, usually in the form key&#x3D;value.</p><p>Sermant’s dynamic configuration model is implemented on the Nacos configuration center as follows:</p><p>Group (grouping information): the Nacos Namespace and Group are combined to form the Group information of Sermant’s dynamic configuration model.</p><p>Key (configuration item name): the Nacos configuration set ID (Data ID) serves as the configuration item name of Sermant’s dynamic configuration model.</p><p>Configuration sets (Data IDs) are isolated by Namespace and configuration Group, and configuration sets with the same Data ID can exist under different Namespaces and Groups. This guarantees that Sermant isolates configuration items through different Groups, while the same Key (configuration item name) can exist under different Groups.</p><ul><li><strong>Configuration model implementation based on ServiceComb Kie</strong></li></ul><p>The ServiceComb-Kie (ServiceComb Key-Value Store) data model stores and manages configuration information as key-value pairs. ServiceComb-Kie controls the scope of effect of a configuration through labels, and a unique key-value pair is identified by its labels and Key.</p><p>Sermant’s dynamic configuration model is implemented on ServiceComb-Kie as follows:</p><p>Group (grouping information): ServiceComb-Kie’s label information serves as the Group information of Sermant’s dynamic configuration model.</p><p>Key (configuration item name): the ServiceComb-Kie configuration item name serves as the Key of Sermant’s dynamic configuration model.</p><p>Configuration items in ServiceComb-Kie are isolated by their label information. 
Configuration items with the same name can exist under different labels, and different configuration items can be configured under the same label information. This guarantees that Sermant isolates configuration items through different Groups (grouping information), while the same Key (configuration item name) can exist under different Groups.</p><h2 id="III-Best-Practices-for-Sermant’s-Dynamic-Configuration-Model"><a href="#III-Best-Practices-for-Sermant’s-Dynamic-Configuration-Model" class="headerlink" title="III. Best Practices for Sermant’s Dynamic Configuration Model"></a>III. Best Practices for Sermant’s Dynamic Configuration Model</h2><p>Dynamic configuration is one of Sermant’s core features. It helps Sermant manage microservice governance capabilities such as grayscale release, rate limiting and degradation, and link tracing in a unified way, as shown in the figure below:</p><p>When using Sermant dynamic configuration for microservice governance, avoid creating too many Groups (grouping information) and Keys (configuration item names). This reduces the complexity and confusion of the configuration, lowers the performance cost of listening to too many configurations, and improves the maintainability and readability of the configuration.</p><p>Next, we illustrate best practice for Sermant dynamic configuration through the label routing plugin.</p><h2 id="1-Label-Routing-Plugin-Features"><a href="#1-Label-Routing-Plugin-Features" class="headerlink" title="1) Label Routing Plugin Features"></a>1) Label Routing Plugin Features</h2><p>The label routing plugin is the basis of Sermant’s microservice routing governance capability. 
It configures routing rules at service granularity or global granularity for service providers, divides the providers of one or more services into groups, and constrains traffic to flow only within the specified group, thereby achieving traffic isolation. The label routing plugin is also the capability base for scenarios such as traffic coloring, blue-green release, grayscale release, full-link grayscale, and same-availability-zone priority invocation.</p><h2 id="2-Dynamic-Configuration-Model-of-Label-Routing-Plugin"><a href="#2-Dynamic-Configuration-Model-of-Label-Routing-Plugin" class="headerlink" title="2) Dynamic Configuration Model of Label Routing Plugin"></a>2) Dynamic Configuration Model of Label Routing Plugin</h2><p>The label routing plugin configures its rules on top of Sermant’s dynamic configuration model. In the microservice scenario, the dynamic configuration model of the label routing plugin is implemented as follows:</p><p>Group (grouping information): composed of the application name appName and the environment name environment, for example: app&#x3D;${appName}&amp;environment&#x3D;${environment}</p><p>Key (configuration item name): for rules at service granularity, the key is servicecomb.routeRule.${serviceName}, where ${serviceName} is the microservice name of the target application. 
For rules at global granularity, the key is servicecomb.globalRouteRule.</p><p>The configuration model of the label routing plugin implemented on the Zookeeper configuration center is shown below:</p><p>From the label routing plugin’s use of the Sermant dynamic configuration model, we can see that a Group can be generated from the application name and environment name, so a single microservice application needs only one Group, which avoids creating too many Groups. Likewise, the Key can be generated from the microservice scenario and service name, so dynamic configuration can target either a single service or all services. A single instance therefore does not need to listen to many configurations, avoiding the service performance degradation, delayed configuration updates, and configuration conflicts that excessive listening can cause.</p><h2 id="Summary"><a href="#Summary" class="headerlink" title="Summary"></a>Summary</h2><p>Dynamic configuration plays an important role in realizing diverse microservice governance within a JavaAgent and is one of the key means of implementing such governance. Through dynamic configuration, the runtime state of microservices can be adjusted dynamically to achieve dynamic governance. 
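As a concrete illustration, the Group and Key composition used by the label routing plugin can be sketched with two helper functions (illustrative only, not Sermant’s actual code):

```python
# Sketch of how the label routing plugin composes its Group and Key.
# Illustrative helpers only -- not Sermant's actual code.

def routing_group(app_name, environment):
    # One Group per application + environment pair.
    return f"app={app_name}&environment={environment}"

def routing_key(service_name=None):
    # Service-granularity rule, or the global rule when no service is given.
    if service_name is None:
        return "servicecomb.globalRouteRule"
    return f"servicecomb.routeRule.{service_name}"

print(routing_group("demo-app", "production"))  # app=demo-app&environment=production
print(routing_key("order-service"))             # servicecomb.routeRule.order-service
print(routing_key())                            # servicecomb.globalRouteRule
```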
For example, the load balancing policy of microservices can be adjusted through dynamic configuration according to the actual load, achieving more refined load balancing.</p><p>Using Sermant’s dynamic configuration model for microservice governance not only enables dynamic governance, but also reduces the overhead the JavaAgent would incur from too many Groups or from too many configuration items subscribed per instance, and improves the maintainability and readability of the configuration. In addition, Sermant’s dynamic configuration model supports the mainstream configuration centers ServiceComb Kie, Zookeeper, and Nacos, covering different microservice governance scenarios and making governance and operations more convenient for users.</p>]]></content>
    
    
    <summary type="html">How to Diversify Microservices Governance in JavaAgent with Dynamic Configuration Center</summary>
    
    
    
    
    <category term="IaaS" scheme="https://www.nablepart.com/tags/IaaS/"/>
    
    <category term="cloud" scheme="https://www.nablepart.com/tags/cloud/"/>
    
    <category term="cloud computing" scheme="https://www.nablepart.com/tags/cloud-computing/"/>
    
  </entry>
  
  <entry>
    <title>How to Implement Gateway Services with Nginx on Low-Code Platforms</title>
    <link href="https://www.nablepart.com/a15fed194a87/"/>
    <id>https://www.nablepart.com/a15fed194a87/</id>
    <published>2023-10-29T11:50:26.000Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
    <content type="html"><![CDATA[<p><strong>Preface</strong></p><p>In a typical system deployment architecture, an application server is a software or hardware system that carries the core logic of an application. It receives requests from clients and handles the corresponding business logic, data manipulation, and other tasks. Application servers are typically used to support web applications, mobile applications, enterprise applications, and so on. Above the application server usually sits the gateway server, and below it the database service. Interestingly, low-code platforms also have application servers. Today, taking GrapeCity’s enterprise-grade low-code development platform <a href="">Forguncy</a> as an example, I will introduce the auxiliary role that gateway servers play for low-code platforms. <img src="https://img2023.cnblogs.com/blog/139239/202309/139239-20230927103026740-1645389022.png"></p><p><strong>Application Scenarios Realized with Nginx</strong></p><p>In this article, the gateway server Nginx will be used to demonstrate four gateway service scenarios:</p><ol><li>Cross-domain access: allow multiple applications to share the same server port.</li><li>Static resources: complete domain verification for the WeChat public platform and similar third parties.</li><li>IP black and white lists: meet higher security requirements.</li><li>Access logs: record and analyze system responsiveness in detail.</li></ol><p><strong>1. Cross-domain access: allow multiple applications to share the same port on the same server</strong></p><p>Splitting the modules of one system into several applications is a highly recommended practice, both for development management and for system operation and maintenance. However, if the front-end page of one application needs to call the server-side commands of another application, it will encounter the problem of cross-domain access. 
One workaround is to move all cross-application calls to server-side commands: the front-end page of application A calls the server-side commands of application A, and those commands in turn call the WebAPI of application B. This approach increases the workload of developing application A’s server-side commands and adds extra work and risk during later maintenance. <img src="https://img2023.cnblogs.com/blog/139239/202309/139239-20230927103043740-473949187.png"></p><p>In coded development, this cross-domain problem is usually avoided by unifying all applications under the same address and port through a gateway. In low-code development, the solution is the same. <img src="https://img2023.cnblogs.com/blog/139239/202309/139239-20230927103059117-1118130201.png"></p><p>Set up an Nginx server and use the applications as upstreams of Nginx. The specific configuration is as follows:</p><p>(1) Modify the Nginx configuration to add an upstream node under the http node for the management console and for each application, containing the machine name and port number. In the test environment, Nginx is installed on the application server itself, so localhost can be used directly. 
In general, however, Nginx is deployed on another server in the same LAN as the application server; in that case, replace localhost with the application server’s intranet IP.</p><p>Tip: Defining an upstream for each application server, instead of writing the target address directly in the location, improves the readability of the configuration file.</p><figure class="highlight nginx"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line"><span class="section">upstream</span> us-server &#123;</span><br><span class="line">    <span class="attribute">server</span> localhost:<span class="number">22345</span>;</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line"><span class="section">upstream</span> red-server &#123;</span><br><span class="line">    <span class="attribute">server</span> localhost:<span class="number">9101</span>;</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line"><span class="section">upstream</span> green-server &#123;</span><br><span class="line">    <span class="attribute">server</span> localhost:<span class="number">9102</span>;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>(2) Listen on port 80, or another specified port, in the http→server node.</p><figure class="highlight nginx"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="attribute">listen</span> <span class="number">80</span>;</span><br></pre></td></tr></table></figure><p>(3) In the http→server node, configure a 
location node for the management console and for each application, containing the URL matching rule and the corresponding upstream. The most commonly used rule is prefix matching: ^~ means that the path must start with the string that follows it; for example, location ^~ &#x2F;red&#x2F; matches all URLs whose path (the part of the URL after the port number) starts with &#x2F;red&#x2F;.</p><figure class="highlight nginx"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br></pre></td><td class="code"><pre><span class="line"><span class="section">location</span><span class="regexp"> ^~</span> /UserService/ &#123;</span><br><span class="line">    <span class="attribute">proxy_pass</span> http://us-server/UserService/;</span><br><span class="line">    <span class="attribute">proxy_redirect</span> default;</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line"><span class="section">location</span><span class="regexp"> ^~</span> /red/ &#123;</span><br><span class="line">    <span class="attribute">proxy_pass</span> http://red-server/red/;</span><br><span class="line">    <span class="attribute">proxy_redirect</span> default;</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line"><span class="section">location</span><span class="regexp"> ^~</span> /green/ &#123;</span><br><span class="line">    <span class="attribute">proxy_pass</span> http://green-server/green/;</span><br><span class="line">    <span class="attribute">proxy_redirect</span> default;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>(4) On the Forguncy Management Console, change the application’s “Domain Name” (Application Management → Applications → General Settings → Set Domain Name) to the address and port that Nginx listens on, to ensure that external page navigation works correctly. The application’s setting needs to include the application name.</p><p><img src="https://img2023.cnblogs.com/blog/139239/202309/139239-20230927103156228-1909080014.png"></p><p>The “domain name” of the management console also needs to be set (Settings → Security Settings → Set the domain name of the management console site); the console’s setting does not include the application name. <img src="https://img2023.cnblogs.com/blog/139239/202309/139239-20230927103207817-1202034282.png"></p><p>Extended scenario:<br>If your IT security policy requires that only ports 80&#x2F;443 be opened, but you need access to multiple applications, you can also use this scenario to achieve port unification.</p><p><strong>2. Static resources: authentication via WeChat public platform, etc.</strong></p><p>When integrating with the WeChat public platform and other third-party systems, the third party usually requires a file-based domain verification mechanism; for example, the WeChat public platform’s JS interface domain verification requires placing a specific file in the root directory of the domain. <img src="https://img2023.cnblogs.com/blog/139239/202309/139239-20230927103227297-1612904409.png"></p><p>At this point, you can use the gateway’s static resource serving capability to complete the verification.</p><p>
<img src="https://img2023.cnblogs.com/blog/139239/202309/139239-20230927103254297-713067348.png"></p><p>Continue configuring the Nginx file on the basis of [1. Cross-domain access] as follows:</p><p>(1) In the domain name management interface, point the ICP-filed domain name to the external network address of the Nginx server.</p><p>(2) Store the static files that need to be accessible externally in the Nginx static resource root directory on the Nginx server, such as &#x2F;etc&#x2F;Nginx&#x2F;html (depending on the installation and version, the root directory may also be &#x2F;usr&#x2F;share&#x2F;Nginx&#x2F;html or &#x2F;var&#x2F;www&#x2F;html; by default, the Nginx root directory contains two files: index.html and 50x.html). Nginx serves the files in the root directory externally as static web resources. Specifically, when a request with the URL &#x2F;xxx.yyyy is received, Nginx returns the contents of the xxx.yyyy file in the root directory as the response.</p><p><img src="https://img2023.cnblogs.com/blog/139239/202309/139239-20230927103311155-1028781805.png"></p><p><strong>3. IP Black&#x2F;White List: Satisfying Higher Security Requirements</strong></p><p>For application scenarios with high security requirements, black and white lists are often required, such as allowing only specific IPs to access, or blocking specific IPs. These tasks are best performed on the gateway, so that the risk is blocked outside the application server.<br>Because the management console built into Forguncy contains sensitive operations such as application management and user and role management, many organizations require that whitelist control be enabled for this application, allowing access only from IP addresses dedicated to the company’s IT operations team. Next, we continue to improve the Nginx configuration to implement whitelisting on the basis of [1. Cross-domain access]. 
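Nginx evaluates allow and deny directives in order, and the first directive that matches the client IP decides the outcome. That first-match semantics can be sketched as follows (a simplification for illustration, with exact-IP matching only; not Nginx’s implementation, which also supports CIDR ranges):

```python
# Sketch of first-match allow/deny evaluation for a whitelist.
# A simplification for illustration -- not Nginx's implementation
# (real allow/deny also accepts CIDR ranges such as 10.0.0.0/8).

def is_allowed(client_ip, rules):
    """rules: ordered list of ('allow' | 'deny', ip-or-'all') tuples."""
    for action, pattern in rules:
        if pattern == "all" or pattern == client_ip:
            return action == "allow"  # first match wins
    return True  # no rule matched: access is not restricted

# Whitelist: only two addresses may reach the management console.
whitelist = [
    ("allow", "10.32.209.252"),
    ("allow", "113.132.178.118"),
    ("deny", "all"),
]

print(is_allowed("10.32.209.252", whitelist))  # True
print(is_allowed("10.32.209.99", whitelist))   # False
```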
Specific methods of operation are as follows:</p><p>Modify the Nginx configuration: in the http→server node, find the location corresponding to the management console and append the following content to add the intranet address 10.32.209.252 and the extranet address 113.132.178.118 to the whitelist.</p><figure class="highlight nginx"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line"><span class="section">location</span><span class="regexp"> ^~</span> /UserService/ &#123;</span><br><span class="line">    <span class="attribute">proxy_pass</span> http://us-server/UserService/;</span><br><span class="line">    <span class="attribute">proxy_redirect</span> default;</span><br><span class="line">    <span class="attribute">allow</span> <span class="number">10.32.209.252</span>;</span><br><span class="line">    <span class="attribute">allow</span> <span class="number">113.132.178.118</span>;</span><br><span class="line">    <span class="attribute">deny</span> all;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Important tip:<br>Whitelisting at the gateway sits above the system firewall, and the two are not substitutes for each other. You still need firewall policies to avoid exposing unnecessary ports and to reduce security risks.</p><p><strong>4. Access Logs: Detailed Records and Analysis of System Responsiveness</strong></p><p>When you need to evaluate parameters such as the system’s response performance and availability and look for improvements, you need to record application access logs with a third-party mechanism and then connect them to a mainstream log processing and analysis tool chain for follow-up processing (log analysis is a specialized field with mature solutions, such as ELK). <img src="https://img2023.cnblogs.com/blog/139239/202309/139239-20230927103329786-879085603.png"></p><p>The good news is that Nginx has a built-in logging mechanism: with a very simple configuration you can obtain the desired logs, and by following the ELK documentation you can build your own log analysis platform. We continue to improve the Nginx configuration and the logging configuration on the basis of [3. IP black and white list]. The specific steps are as follows:</p><p>(1) Modify the Nginx configuration in the http node to add a log template named json for Filebeat to collect.</p><figure class="highlight nginx"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line"><span class="attribute">log_format</span> json escape=json '&#123;"time_local":"$time_local", '</span><br><span class="line">              '"remote_addr":"$remote_addr", '</span><br><span class="line">              '"request_uri":"$request_uri", '</span><br><span class="line">              '"status": $status, '</span><br><span class="line">              '"upstream_time": "$upstream_response_time"&#125;';</span><br></pre></td></tr></table></figure><p>(2) Modify the http→server node to append the access log configuration, specifying the file path and the template just defined: access_log &#x2F;var&#x2F;log&#x2F;Nginx&#x2F;access.log json; <img src="https://img2023.cnblogs.com/blog/139239/202309/139239-20230927103341826-1640239524.png"></p><p>Commonly used log items and parameters are shown below: <img src="https://img2023.cnblogs.com/blog/139239/202309/139239-20230927103353073-707490179.png"></p><p><strong>Summary</strong></p><p>The configuration files used in this article are attached as links and use the simplest possible configuration. Among them, worker_processes and worker_connections are related to resource usage and performance; please adjust them appropriately according to the machine configuration.</p>]]></content>
    
    
    <summary type="html">How to Implement Gateway Services with Nginx on Low-Code Platforms</summary>
    
    
    
    
    <category term="IaaS" scheme="https://www.nablepart.com/tags/IaaS/"/>
    
    <category term="cloud" scheme="https://www.nablepart.com/tags/cloud/"/>
    
    <category term="cloud computing" scheme="https://www.nablepart.com/tags/cloud-computing/"/>
    
  </entry>
  
  <entry>
    <title>How to build three layers of protection for your code in software development</title>
    <link href="https://www.nablepart.com/adaa243b7ee8/"/>
    <id>https://www.nablepart.com/adaa243b7ee8/</id>
    <published>2023-10-29T11:50:26.000Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
    <content type="html"><![CDATA[<p>In the application of DevSecOps, static analysis tools undertake the very important task of looking after code quality and security in the development phase. In this paper, based on the differences in development environment, code characteristics, and inspection tool capabilities at different points of the development process, we propose deploying inspection tools according to local conditions to form a progressive, three-layer code security defense system. This improves the overall security of the application software and, at the same time, effectively implements the shift-left security strategy to reduce the cost of fixing security issues.</p><h2 id="1-DevSecOps"><a href="#1-DevSecOps" class="headerlink" title="1. DevSecOps"></a>1. DevSecOps</h2><p>In recent years, the adoption of DevSecOps in large enterprises has increased year by year, from 41.3% in 2020 to over 63.5% in 2022, a compound growth rate of over 20%.</p><p>The term DevSecOps was first coined by Gartner in 2012 and has gradually become a hot topic in software development over the last few years. DevSecOps bridges the gap between developers, testers, security teams, and operations teams; it improves communication and collaboration between teams, with the goal of delivering faster and more efficiently. DevSecOps adds security activities on the foundation of DevOps, improving security while preserving rapid development and deployment, and embedding security into the application so that security threats can be responded to more quickly.</p><p>The following diagram shows the nine phases of the DevSecOps software lifecycle as defined by the U.S. Department of Defense (DoD): Plan, Develop, Build, Test, Release, Deliver, Deploy, Operate, and Monitor. 
Security is embedded in each of these phases. The DevSecOps lifecycle is highly adaptable and has many feedback loops to drive continuous improvement. DevSecOps poses new challenges to traditional software development in terms of management philosophy, management methodology, organizational structure, development process, development platforms, tool integration, and corporate culture.</p><p>In the application of DevSecOps, static analysis tools assume a very important role in caring for code quality and security during the development phase. This paper focuses on the important role of static analysis tools in the development domain of DevSecOps.</p><h2 id="2-DOD-DevSecOps"><a href="#2-DOD-DevSecOps" class="headerlink" title="2. DOD DevSecOps"></a>2. DOD DevSecOps</h2><p>The U.S. Department of Defense (DoD) has been publishing a series of DevSecOps documents since 2021: the DoD Enterprise DevSecOps Strategy Guide, DoD Enterprise DevSecOps Fundamentals, DevSecOps Reference Designs, the DevSecOps Operations Manual, and other supporting documents.</p><p>This May, these were supplemented with the DevSecOps Fundamentals Guidebook: Activities and Tools, in which the security activities and corresponding tools to be accomplished during the DevSecOps lifecycle are defined more clearly. 
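As a toy illustration of one of those development-phase activities, checking commits for sensitive information before they are pushed, consider the following sketch (the patterns are hypothetical; real commit-scanning tools in the code repository security plugin category are far more thorough):

```python
import re

# Toy pre-commit scan for sensitive information in changed lines.
# Hypothetical patterns for illustration only -- real commit-scanning
# tools use much larger rule sets and entropy-based detection.

SENSITIVE_PATTERNS = [
    re.compile(r"(?i)\b(password|passwd|secret)\s*=\s*\S+"),
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),  # AWS-style access key id
]

def scan_diff(changed_lines):
    """Return (line_number, line) pairs that look like leaked secrets."""
    findings = []
    for lineno, line in enumerate(changed_lines, start=1):
        if any(p.search(line) for p in SENSITIVE_PATTERNS):
            findings.append((lineno, line))
    return findings

diff = [
    "db_host = example.internal",
    "password = hunter2",
]
print(scan_diff(diff))  # [(2, 'password = hunter2')]
```

A hook like this would notify the developer and block the commit when `scan_diff` returns any findings, mirroring the behavior the guidebook describes for commit scanning.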
The security activities involved are listed in the following table:</p><p>| Security Activities | Phases | Dependent Tools |<br>| — | — | — |<br>| Task-Based Cyber Risk Assessment | All | Task-based cyber risk assessment tools |<br>| Threat Modeling | Planning | Threat modeling tools |<br>| <strong>Code Commit Scanning</strong> | <strong>Development</strong> | <strong>Code repository security plugin</strong> |<br>| Secure Code Development | Development | IDE |<br>| <strong>Static Code Scanning Before Commit</strong> | <strong>Development</strong> | <strong>IDE security plugin</strong> |<br>| Dependency Component Vulnerability Checking | Build | Dependency checking &#x2F; bill-of-materials checking tools |<br>| <strong>Static Application Security Testing and Scanning (SAST)</strong> | <strong>Build</strong> | <strong>Static application security testing and scanning (SAST) tools</strong> |<br>| Database Security Testing | Testing | Security compliance tools |<br>| Dynamic Application Security Testing and Scanning (DAST) | Testing | DAST or interactive application security testing (IAST) tools |<br>| Interactive Application Security Testing (IAST) | Testing | DAST or IAST tools |<br>| Manual Security Testing (e.g. penetration testing) | Testing | Various tools and scripts (may include network security testing tools) |<br>| Service Security Testing | Testing | Security compliance tools |<br>| Post-Deployment Security Scanning | Deployment | Security compliance tools |<br>| Compliance Monitoring (Resources and Services) | Monitoring | Compliance tools, operations kanban |<br>| Database Monitoring and Security Auditing | Monitoring | Compliance tools, operations kanban |<br>| Runtime Application Security Protection (RASP) | Monitoring | Security compliance tools |<br>| System Security Monitoring | Monitoring | Information Security Continuous Monitoring (ISCM) |<br>| SBOM Software Composition Analysis | Late build | SBOM and software factory risk continuous monitoring tools |<br>| Interface Security Testing | Build | API security testing tools |<br>| Cooperative and Adversarial Testing | Operations | Cooperative and adversarial testing tools |<br>| Continuous Network Operations Testing | O&amp;M | Continuous network operations testing tools |<br>| Engineering Obfuscation | O&amp;M | Engineering obfuscation tools |</p><p>From this activity table we can see that there are three key security checkpoints during the development process (highlighted in bold in the table above), and the information about these three checkpoints from the documentation has been combined into the following 
table:</p><p>| Activity | Baseline | Description | Inputs | Outputs | Tools |<br>| — | — | — | — | — | — |<br>| Static analysis of code before commit | Requirements | Scans and analyzes code as developers write it; notifies developers of potential code weaknesses and makes recommendations for remediation. | Source code, known weaknesses | Weaknesses discovered in the code | IDE security plugin |<br>| Code commit checking | Requirements | Checks changes for sensitive information before they are pushed to the code repository; if suspicious content is found, notifies the developer and blocks the commit. | Local commits | Detected security issues and warnings | Code repository security plugin |<br>| Static application security testing (SAST) | Requirements | Performs static analysis checks on the software system. | Source code, known security issues and vulnerabilities | Static inspection report and remediation recommendations | Static analysis tools |</p><p>From this table we can see that the appropriate static analysis tests need to be completed at the following points during the development phase:</p><ul><li>In the IDE, an IDE security plugin performs security inspections on code that is about to be committed;</li><li>In the code repository, a security plugin scans code commits as they arrive, the gate often referred to as “access control”;</li><li>In the build, a static analysis tool completes static application security testing and scanning (SAST).</li></ul><p>The DevSecOps Foundation Guide: Activities and Tools only gives a rough outline of the activities and tools required at these testing points; it does not specify a concrete process integration or how to select tools for the different testing points.</p><h2 id="3-OWASP-DevSecOps"><a 
href="#3-OWASP-DevSecOps" class="headerlink" title="3. OWASP DevSecOps"></a>3. OWASP DevSecOps</h2><p>The Open Worldwide Application Security Project (OWASP) is a non-profit foundation dedicated to improving software security. It pursues that mission through community-led open source software projects, hundreds of chapters worldwide, tens of thousands of members, and local and global conferences.</p><p>The OWASP DevSecOps Guideline explains how to implement a security pipeline, describes best practices, and lists the tools that can be used for the purpose.</p><p>The guide also dedicates a section to the pre-commit stage of DevSecOps practice, as shown in the figure below:</p><p>This diagram lays out the checking process for pre-commit and gives two types of checks that need to be done there:</p><ul><li>Ensure that there are no passwords or keys in the code;</li><li>Ensure the code follows the Linter rules.</li></ul><p>“Linter” is hard to translate into Chinese. Lint is the small balls of fiber that form on clothes in a washing machine, where the tumbling friction rolls the fibers together. People wanted to get rid of these stray “balls”, and eventually a handy tool appeared, the lint remover (linter), which takes them off with a single roll.</p><p>In 1978, Stephen C. Johnson of Bell Labs, while debugging a C project, wondered: why not build a tool that could tell him what was wrong with the code he was writing? That tool also became known as a Linter. A Linter is a static analysis tool used mainly to find syntax errors, potential bugs, code style problems, and so on. The various tools we commonly see named linter are this type of static checking tool. 
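</p><p>As a toy illustration of what a linter does, here is a minimal sketch (illustrative only, not any real tool’s implementation) that uses Python’s ast module to flag two classic findings: comparing against None with == and bare except clauses:</p><pre><code class="python">import ast

def tiny_lint(source):
    """Return (line, message) findings for two classic lint rules."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        # Rule 1: "x == None" (or "x != None") should use "is" / "is not".
        if isinstance(node, ast.Compare):
            operands = [node.left] + node.comparators
            if any(isinstance(o, ast.Constant) and o.value is None for o in operands) \
               and any(isinstance(op, (ast.Eq, ast.NotEq)) for op in node.ops):
                findings.append((node.lineno, "comparison to None; use 'is' / 'is not'"))
        # Rule 2: a bare "except:" silently swallows every exception.
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            findings.append((node.lineno, "bare 'except:' clause"))
    return findings

code = """\
def f(x):
    if x == None:
        return 0
    try:
        return 1 / x
    except:
        return None
"""
print(tiny_lint(code))
</code></pre><p>Real linters work the same way at heart: parse the source into a syntax tree, walk it, and report pattern matches as findings.</p><p>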
Almost every language has its own Linter tools.</p><ul><li><p>The OWASP DevSecOps Guide gives the role of a Linter:</p><ul><li>Detect errors in code, including errors that could lead to security vulnerabilities;</li><li>Detect formatting or styling issues, making code more readable and hence more secure;</li><li>Detect and suggest best practices;</li><li>Improve the overall quality of the code;</li><li>Make code easier to maintain, because everyone follows the same linting rules.</li></ul></li><li><p>The OWASP DevSecOps Guide divides static inspection tools into Linters and advanced static inspection tools, depending on the problems being inspected:</p><ul><li><p>Linter tools are the most basic form of static analysis. Using a linter helps to identify common errors such as:</p><ul><li>Array index out of bounds;</li><li>Null pointer dereferences;</li><li>(Potentially) dangerous data type combinations;</li><li>Unreachable code (dead code);</li><li>Non-portable constructs.</li></ul></li><li><p>Advanced static analysis tools typically provide:</p><ul><li>Pattern-based inspections;</li><li>Quality and complexity metrics;</li><li>Developer-oriented best practice recommendations;</li><li>Support for a wide range of safety- and security-focused coding standards;</li><li>Support for developing safety-critical applications (e.g., out-of-the-box certification support).</li></ul></li></ul></li></ul><p>The OWASP DevSecOps Guide lists the differences in the types of defects checked by different tools mainly to illustrate that different types of inspection tools need to be deployed at different detection points.</p><h2 id="4-Security-Left-Shift"><a href="#4-Security-Left-Shift" class="headerlink" title="4. Security Left Shift"></a>4. 
Security Left Shift</h2><p>In “Applied Software Measurement: A Comprehensive Analysis of Productivity and Quality”, Capers Jones shows, from a software engineering practice perspective, that most defects are introduced during the coding phase, and that the later in the development process a defect is discovered, the more expensive it is to fix.</p><p>So we hope that by pulling testing, which traditionally follows development, into the development process itself, we can effectively reduce the cost of fixing the defects introduced during development.</p><ul><li><p>Testing shifts left into the development phase;</p></li><li><p>As testing shifts left, defects are reduced during the development phase.</p></li></ul><p>After the concept of DevSecOps was introduced, the concept of <strong>“security left shift”</strong> followed naturally. “Security left shift” (as defined in the OWASP DevSecOps Guide) is an approach that embeds security into the development process and considers security from the initial steps of application or system design. In other words, security is the responsibility of everyone engaged in the software development and operations process. Of course, security is a profession and we need highly skilled people in security-related roles; but in this approach, every designer, software architect, developer, DevOps engineer, and so on shares responsibility for security along with the security staff.</p><p>From this description we can see that the security left shift involves two specific commitments:</p><ul><li>Security starts with design and continues throughout the process;</li><li>All personnel are involved in security activities.</li></ul><p>Based on our reading of the U.S. Department of Defense’s DevSecOps Foundation Guide: Activities and Tools and OWASP’s DevSecOps Guideline, there are three checkpoints during the coding phase of the development process: the IDE, the commit gate, and the continuous build (CI). 
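</p><p>To make the commit-gate checkpoint concrete, here is a minimal sketch of a pre-commit secret scan (purely illustrative; the patterns and helper are assumptions, and real gates use dedicated secret-scanning tools) that flags a change when any line looks like a hard-coded credential:</p><pre><code class="python">import re

# Naive credential patterns; real scanners ship far larger rule sets.
SECRET_PATTERNS = [
    re.compile(r"(password|passwd|secret)\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key id
]

def scan_changes(lines):
    """Return the 1-based numbers of lines that look like committed secrets."""
    return [i for i, line in enumerate(lines, start=1)
            if any(p.search(line) for p in SECRET_PATTERNS)]

changes = [
    'db_host = "10.0.0.5"',
    'db_password = "hunter2"',
    "api_key = os.environ['API_KEY']",
]
hits = scan_changes(changes)
if hits:
    print("commit blocked; suspicious lines:", hits)
</code></pre><p>Wired into a repository hook, a non-empty result would abort the commit and report the offending lines back to the developer, which is the behavior the activity table asks of the commit-scanning checkpoint.</p><p>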
According to the concept of security left shift, can we also <strong>“go all the way to the left”</strong>: put the static checking tools into the IDE or the commit gate, shifting static checking left far enough that the checking stage of the continuous build (CI) could be removed? Is it possible to go all the way to the left?</p><h2 id="4-1-the-idiomatic-story-“Adapting-to-the-local-context”"><a href="#4-1-the-idiomatic-story-“Adapting-to-the-local-context”" class="headerlink" title="4.1. the idiomatic story “Adapting to the local context”"></a>4.1. The idiom story “Adapting to the local context”</h2><p>It has been a while since I told a story in a blog post. China’s sages long ago blended all sorts of truths and philosophies about people and their behavior into stories that are easy to understand, refining them into enjoyable idiom stories so that ordinary people could remember the essence of these philosophies and pass them down from one generation to the next. Beyond the idioms themselves, what people most enjoy digging into are the poignant stories behind them.</p><p>At the end of the Spring and Autumn Period, the king of Chu listened to slander and killed Wu Zixu’s family, and Wu Zixu fled to the state of Wu. Hoping to take revenge with the help of Wu, Wu Zixu advised the king of Wu: to make the country rich and strong and the people stable, first build high walls to strengthen the defenses, so that other states would not dare to invade. 
At the same time, agriculture must be developed: only when agriculture flourishes can the country grow rich and strong, the people live and work in peace and contentment, and the army stay well provisioned.</p>]]></content>
    
    
    <summary type="html">How to build three layers of protection for your code in software development</summary>
    
    
    
    
    <category term="IaaS" scheme="https://www.nablepart.com/tags/IaaS/"/>
    
    <category term="cloud" scheme="https://www.nablepart.com/tags/cloud/"/>
    
    <category term="cloud computing" scheme="https://www.nablepart.com/tags/cloud-computing/"/>
    
  </entry>
  
  <entry>
    <title>How to get an object instance of a class</title>
    <link href="https://www.nablepart.com/f0376546b02e/"/>
    <id>https://www.nablepart.com/f0376546b02e/</id>
    <published>2023-10-29T11:50:26.000Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
<content type="html"><![CDATA[<p>How do you get the object instances of a Java class? A class that is not necessarily a singleton, does not necessarily provide static accessor methods, is not necessarily managed by Spring, and whose source code you may not even be able to modify: how do we get all object instances of such a class? Here is an implementation based on JVMTI.</p><h2 id="Instructions-for-use"><a href="#Instructions-for-use" class="headerlink" title="Instructions for use"></a>Instructions for use</h2><p>First add the Maven dependency:</p><pre><code class="xml">&lt;dependency&gt;
    &lt;groupId&gt;io.github.liubsyy&lt;/groupId&gt;
    &lt;artifactId&gt;FindInstancesOfClass&lt;/artifactId&gt;
    &lt;version&gt;1.0.1&lt;/version&gt;
&lt;/dependency&gt;
</code></pre><p>Then call <strong>InstancesOfClass.getInstances(Class&lt;?&gt; targetClass)</strong> to get all object instances of a class:</p><pre><code class="java">public class InstancesOfClass {
    /**
     * Native method: returns all instances of a class.
     * @param targetClass the Class whose instances should be queried
     * @return all instances of targetClass currently on the heap
     */
    public static native Object[] getInstances(Class&lt;?&gt; targetClass);
}
</code></pre><h2 id="Principle-of-implementation"><a href="#Principle-of-implementation" class="headerlink" title="Principle of implementation"></a>Principle of implementation</h2><p>Java itself has no interface for getting instances from a class; we need the JVMTI interfaces IterateOverInstancesOfClass and GetObjectsWithTags.</p><p>First write a class that declares the native method (the InstancesOfClass class shown above), then use javah to generate the .h file and write the implementation in C++:</p><pre><code class="cpp">#include &lt;jni.h&gt;
#include &lt;jvmti.h&gt;
#include "com_liubs_findinstances_jvmti_InstancesOfClass.h"

// Heap-iteration callback: tag every instance of the target class.
static jvmtiIterationControl JNICALL objectInstanceCallback(jlong class_tag, jlong size, jlong* tag_ptr, void* user_data) {
    *tag_ptr = 1;
    return JVMTI_ITERATION_CONTINUE;
}

JNIEXPORT jobjectArray JNICALL Java_com_liubs_findinstances_jvmti_InstancesOfClass_getInstances(JNIEnv* env, jclass clazz, jclass targetClazz) {
    JavaVM* vm;
    env-&gt;GetJavaVM(&amp;vm);

    jvmtiEnv* jvmti;
    vm-&gt;GetEnv((void**)&amp;jvmti, JVMTI_VERSION_1_0);

    // Object tagging requires the can_tag_objects capability.
    jvmtiCapabilities capabilities = {0};
    capabilities.can_tag_objects = 1;
    jvmti-&gt;AddCapabilities(&amp;capabilities);

    // Tag every live instance of the target class.
    jvmti-&gt;IterateOverInstancesOfClass(targetClazz, JVMTI_HEAP_OBJECT_EITHER,
                                       objectInstanceCallback, NULL);

    // Collect all objects carrying the tag.
    jlong tag = 1;
    jint count;
    jobject* instances;
    jvmti-&gt;GetObjectsWithTags(1, &amp;tag, &amp;count, &amp;instances, NULL);

    printf("Found %d objects with tag\n", count);

    // Convert jobject* to jobjectArray and return it.
    jobjectArray result = env-&gt;NewObjectArray(count, targetClazz, NULL);
    for (int i = 0; i &lt; count; i++) {
        env-&gt;SetObjectArrayElement(result, i, instances[i]);
    }

    jvmti-&gt;Deallocate((unsigned char*)instances);
    return result;
}
</code></pre><p>Then compile the C++ source with gcc&#x2F;g++ into the platform’s dynamic link library (.so on Linux, .dylib on macOS, .dll on Windows), load it with System.load(), and finally call the <strong>InstancesOfClass.getInstances(Class&lt;?&gt; targetClass)</strong> method.</p><p>See <a href="https://github.com/Liubsyy/FindInstancesOfClass">https://github.com/Liubsyy/FindInstancesOfClass</a> for the source code, which contains a test case.</p>]]></content>
    
    
    <summary type="html">How to get an object instance of a class</summary>
    
    
    
    
    <category term="IaaS" scheme="https://www.nablepart.com/tags/IaaS/"/>
    
    <category term="cloud" scheme="https://www.nablepart.com/tags/cloud/"/>
    
    <category term="cloud computing" scheme="https://www.nablepart.com/tags/cloud-computing/"/>
    
  </entry>
  
  <entry>
    <title>Interpreting the IDC MarketScape report, Akamai is recognized as a global public cloud IaaS competitor on the road to growth!</title>
    <link href="https://www.nablepart.com/70bec5d610ee/"/>
    <id>https://www.nablepart.com/70bec5d610ee/</id>
    <published>2023-10-29T11:50:26.000Z</published>
    <updated>2025-08-25T09:00:39.794Z</updated>
    
<content type="html"><![CDATA[<p>After two years, IDC has once again released its IDC MarketScape: Worldwide Assessment of Public Cloud Infrastructure-as-a-Service Providers report, in which Akamai is recognized as a contender (pictured). In this report, IDC evaluated 13 cloud providers that offer services in all regions of the world and generated more than $100 million in Infrastructure-as-a-Service (IaaS) revenue in 2021.</p><p>Akamai’s inclusion in this first IDC MarketScape report on global public cloud IaaS in two years demonstrates the strength of Akamai’s cloud services, which are built on Akamai’s Edge Computing Network, one of the largest distributed networks in the world.</p><hr><p><strong>Extended reading on Akamai cloud computing</strong></p><p>[For cloud services abroad, choose Akamai Linode!]</p><hr><h2 id="Passionate-user-base"><a href="#Passionate-user-base" class="headerlink" title="Passionate user base"></a>Passionate user base</h2><p>In its provider profile on Akamai, IDC MarketScape states, “In February, content delivery network (CDN) provider Akamai announced plans to acquire Linode for $900 million, immediately becoming a prominent new force in the public cloud IaaS space.” The report also acknowledges that “if there’s one thing Akamai has, it’s a passionate user base. At the time of the acquisition, Linode had more than 150,000 active customers.”</p><p>The report goes on to note that Akamai “now offers a wide range of open source compute, storage, networking, database and other middle-tier software and development tools. This lineup can be combined with Akamai’s CDN, serverless and Web security services.”</p><h2 id="Poised-for-Further-Success"><a href="#Poised-for-Further-Success" class="headerlink" title="Poised for Further Success"></a>Poised for Further Success</h2><p>IDC MarketScape also noted, “By the end of 2023, Akamai will add new Linode data centers in North America, Latin America, EMEA, and Asia Pacific. 
This expansion complements and integrates with Akamai’s globally distributed network, which currently consists of 4,200 points of presence in 135 countries around the world.”</p><p>Looking ahead, the report states, “The combination of Akamai and Linode should now be able to meet the needs of a broader audience for cloud computing by offering a broad portfolio of cloud, CDN and security products. In a world where business models in many industries depend on the reliable delivery of rich content, Akamai appears poised for even greater success with IaaS.”</p><blockquote><p>“Spending on public cloud IaaS has also seen significant growth, increasing 35.6% to $91.3 billion by 2021.”</p></blockquote><h2 id="COVID-19-How-to-drive-cloud-growth"><a href="#COVID-19-How-to-drive-cloud-growth" class="headerlink" title="COVID-19 How to drive cloud growth"></a>How COVID-19 drove cloud growth</h2><p>Since the release of the last IDC MarketScape on public cloud IaaS in 2020, IDC has observed significant changes in the cloud marketplace as hyperscalers and new players such as Akamai expand their services into new areas around the world.</p><p>While IDC recognizes that changing business realities during the COVID-19 pandemic triggered a partial migration to public cloud IaaS, the report also concludes that there are “better, richer choices, thanks to strategic decisions made by providers.” Some of those changes include:</p><ul><li>Innovative partnerships between local software providers and public cloud IaaS providers;</li><li>Single-focus cloud providers entering new areas, such as adding compute to storage;</li><li>Large independent software providers moving operations to the public cloud and catering to customers who want their applications and data to run close to home for privacy and regulatory reasons;</li><li>Multi-cloud continuing to evolve as the preferred deployment model for customers, thanks to factors such as service selection and provider management.</li></ul><p>Spending on 
public cloud IaaS also shows significant growth, growing 35.6% to $91.3 billion in 2021. IDC’s analysts do not see the trend reversing: “In fact, IDC’s estimates show that spending on public cloud IaaS will exceed spending on traditional infrastructure and private clouds combined over the next several years.”</p><blockquote><p>“The latest IDC MarketScape also highlights a number of key takeaways, including how to build a cloud strategy that drives innovation, saves money and delivers benefits.”</p></blockquote><h2 id="Key-Takeaways"><a href="#Key-Takeaways" class="headerlink" title="Key Takeaways"></a>Key Takeaways</h2><p>The latest IDC MarketScape also highlights some key takeaways, including how to build a cloud strategy that drives innovation, cost savings and effectiveness:</p><ul><li>Free developers from restrictive rules set by a single cloud provider;</li><li>Allow organizations to compare prices and find the right fit for their budgets;</li><li>Match the right workloads to the right cloud.</li></ul><h2 id="Finding-the-right-cloud-for-the-right-workloads"><a href="#Finding-the-right-cloud-for-the-right-workloads" class="headerlink" title="Finding the right cloud for the right workloads"></a>Finding the right cloud for the right workloads</h2><p>IDC MarketScape acknowledges that there will always be some traditional on-premises infrastructure use cases for some businesses, “such as older, stable custom applications.”</p><p>Still, other common issues that tie companies to on-premises databases, including latency and security, can be addressed by Akamai, which supports and protects online businesses. The opportunity Akamai has seized in this modern era of public cloud IaaS is to match specific workloads to specific clouds.</p><h2 id="Locating-Workloads"><a href="#Locating-Workloads" class="headerlink" title="Locating Workloads"></a>Locating Workloads</h2><p>As IDC MarketScape notes, “It’s critical to locate workloads not only by cost, but also by factors 
such as service adjacency, the provider’s broader ecosystem, and the provider’s commitment to interoperability and open standards.”</p><h2 id="Get-and-read-an-excerpt"><a href="#Get-and-read-an-excerpt" class="headerlink" title="Get and read an excerpt"></a>Get and read an excerpt</h2><p>Excerpts from IDC MarketScape: Global Assessment of Public Cloud Infrastructure-as-a-Service Providers are available for free download. If you have questions about migrating to an IaaS platform, we would be happy to set up a meeting with one of Akamai’s cloud experts to go over your questions with you.</p><hr><p>Does this article feel like a good read? Want to try it out for yourself on the Linode platform right away? Don’t forget to sign up now and get $100 worth of free credits to try out the features and services described in this article for yourself.</p><p>[Akamai is the cloud service of choice!]</p><p>Follow <strong>Akamai</strong> to learn about the highly available MySQL&#x2F;MariaDB reference architecture and sample applications.</p>]]></content>
    
    
    <summary type="html">Interpreting the IDC MarketScape report, Akamai is recognized as a global public cloud IaaS competitor on the road to growth!</summary>
    
    
    
    
    <category term="IaaS" scheme="https://www.nablepart.com/tags/IaaS/"/>
    
    <category term="cloud" scheme="https://www.nablepart.com/tags/cloud/"/>
    
    <category term="cloud computing" scheme="https://www.nablepart.com/tags/cloud-computing/"/>
    
  </entry>
  
  <entry>
    <title>Linode Live Migration Explained</title>
    <link href="https://www.nablepart.com/16bd070ea9b3/"/>
    <id>https://www.nablepart.com/16bd070ea9b3/</id>
    <published>2023-10-29T11:50:26.000Z</published>
    <updated>2025-08-25T09:00:39.794Z</updated>
    
<content type="html"><![CDATA[<p>When developers deploy workloads to cloud computing platforms, they often don’t need to think about the underlying hardware that runs those services. Hardware maintenance and physical constraints are often invisible in the idealized image of the cloud, yet hardware inevitably requires maintenance from time to time, which can lead to downtime. To avoid passing such downtime on to our customers and to truly realize the promise of the cloud, Linode offers a tool called Live Migration.</p><p>With Live Migration, Linode instances can be moved between physical servers without service interruption. When a Linode instance is moved with the Live Migration tool, the migration is completely invisible to the processes running inside it. If the hardware of one host requires maintenance, all Linode instances on that host can be seamlessly transferred to another host through live migration. Once the migration is complete, the physical hardware can be repaired with no customer-impacting downtime.</p><p>Live migration has become almost a defining technology, a dividing line between cloud and non-cloud. In this article, we will delve into the details behind it.</p><hr><p>To celebrate Linode joining the Akamai Solutions family, sign up for Linode now and get $100 worth of free credit to use on any of the Linode Cloud Platform’s services. Click here to learn more and sign up today.</p><hr><h2 id="Live-Migration-Works"><a href="#Live-Migration-Works" class="headerlink" title="Live Migration Works"></a>How Live Migration Works</h2><p>Like most new projects, Linode’s live migration started with a lot of research, a series of prototypes, and a lot of help from colleagues and management. 
Our first step was to investigate how live migration is handled by QEMU, the virtualization technology used by Linode: live migration is a built-in QEMU feature, so our team’s focus was on bringing that capability to Linode rather than reinventing it.</p><p>So how exactly does QEMU implement live migration? The whole process divides into the following four steps:</p><ol><li>Start the target QEMU instance with exactly the same parameters as the source QEMU instance to be migrated.</li><li>Live-migrate the disks. Any changes made to the disk contents during the data transfer are also committed to the target disk.</li><li>Live-migrate the memory. Any changes made to the contents of memory during the migration are also committed to the target memory, and if disk contents change during this step, those changes are committed to the target instance’s disks as well.</li><li>Perform the cutover. The source and target QEMU instances pause once QEMU confirms that the remaining memory pages can be transferred safely. QEMU copies the last few pages of memory data and the machine state, which includes the CPU caches and the next CPU instruction. QEMU then starts the target, and the target instance resumes from exactly the state in which the source instance stopped.</li></ol><p>These steps summarize how a QEMU live migration is executed. However, specifying exactly how the target QEMU instance should be started still involves many manual operations. 
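</p><p>Concretely, such steps are driven through QMP, QEMU’s JSON control protocol. The sketch below builds two real QMP payloads, the capabilities handshake and the migrate command; the helper function and the example address are illustrative assumptions, not Linode’s actual tooling:</p><pre><code class="python">import json

def qmp_command(name, **arguments):
    """Serialize a QMP command (QMP messages are single JSON objects)."""
    cmd = {"execute": name}
    if arguments:
        cmd["arguments"] = arguments
    return json.dumps(cmd)

# A QMP client first negotiates capabilities, then asks the source QEMU
# to stream its state to the waiting target instance.
print(qmp_command("qmp_capabilities"))
print(qmp_command("migrate", uri="tcp:192.0.2.10:4444"))
</code></pre><p>In practice these payloads are written to QEMU’s QMP socket, and the client polls the query-migrate command to watch the transfer’s progress.</p><p>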
In addition, each operation in the process must be executed at the correct time.</p><h2 id="Linode’s-approach-to-real-time-migration"><a href="#Linode’s-approach-to-real-time-migration" class="headerlink" title="Linode’s approach to real-time migration"></a>Linode’s approach to real-time migration</h2><p>Having analyzed the techniques the QEMU developers had already implemented, the next question was how we would actually bring live migration to Linode; answering it was the main focus of our work.</p><p>In step 1 of the live migration workflow, the target QEMU instance must be started so that it can accept the incoming migration connection. Our initial idea for this step was to take the <a href="">configuration file</a> of the current Linode instance and apply it on the target machine. In theory this should be simple, but in practice it turns out to be much more complex. In particular, while a configuration file can tell us how a Linode instance was started, it does not necessarily describe the complete state of the instance once it is running. For example, a user can attach a <a href="">block storage</a> device by hot-plugging it after the Linode instance has finished booting, and this is not recorded in the configuration file.</p><p>To create a matching QEMU instance on the target host, the currently running QEMU instance must therefore be profiled. We profile a running instance through the <a href="">QMP</a> interface, which provides a wealth of information about the layout of the QEMU instance but does not tell us what is happening inside it from the guest system’s perspective. For local SSDs and block storage, for example, it can only tell us where each disk is connected and which virtualized PCI slot the virtual disk is attached to. 
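</p><p>Conceptually, the profiling step merges the original launch arguments with live QMP state. A sketch with simplified, illustrative data shapes (<code>query-block</code> is a real QMP command, but the field selection here is an assumption, not Linode’s actual format):</p>

```python
def build_profile(qemu_args, block_query):
    """Combine the instance's launch arguments with (simplified) output of
    QMP's query-block into a profile that can recreate the instance on the
    target host. Hot-plugged disks show up here even though they are absent
    from the launch arguments."""
    disks = [
        {"device": b["device"], "file": b["inserted"]["file"]}
        for b in block_query
        if b.get("inserted")          # skip drives with no medium inserted
    ]
    return {"args": list(qemu_args), "disks": disks}

profile = build_profile(
    ["-m", "4096", "-smp", "2"],
    [
        {"device": "virtio0", "inserted": {"file": "/dev/sda"}},
        {"device": "virtio1", "inserted": {"file": "/dev/vol/bs1"}},  # hot-plugged
        {"device": "ide1-cd0"},  # empty drive, dropped from the profile
    ],
)
```
<p>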
After querying QMP and examining the QEMU interfaces, a profile can be constructed that describes how to create an identical instance at the target location.</p><p>The target computer receives a complete description of what the source instance actually looks like and can then faithfully rebuild it, with one difference: the target QEMU instance is started with an option that tells QEMU to accept an incoming migration.</p><p>At this point the profiling work for live migration is essentially complete, and it is time to look at how QEMU carries these operations out. The QEMU process tree consists of a control process and multiple worker processes: one worker handles tasks such as answering QMP calls and driving the live migration, while the others map one-to-one to guest CPUs. The guest environment is isolated from the QEMU-side functionality and behaves much like a standalone system.</p><p>In this sense, we deal with three layers:</p><ul><li>Layer 1 is the management layer;</li><li>Layer 2 is the part of the QEMU process that handles all of these operations;</li><li>Layer 3 is the actual guest layer, which Linode users interact with.</li></ul><p>Once the target instance is up and ready to receive the incoming migration, the target hardware tells the source hardware to start sending data. Upon receiving this signal, the source tells QEMU to begin transferring disk contents. The software autonomously monitors the progress of the disk transfer, and once the transfer is complete it automatically begins migrating the memory contents. 
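</p><p>This monitoring flow, disk first, then memory, then cutover, can be sketched as a small state machine; the phase names and the completion check are illustrative assumptions, not Linode’s actual code:</p>

```python
PHASES = ["disk", "memory", "cutover", "done"]

def next_phase(phase, remaining_bytes):
    """Advance once the current transfer drains; mirrors the
    disk -> memory -> cutover flow described in the text."""
    if phase == "done":
        return "done"
    if remaining_bytes == 0:
        return PHASES[PHASES.index(phase) + 1]
    return phase  # still draining: keep monitoring this phase

phase = next_phase("disk", 0)       # disk copy finished, start memory
phase = next_phase(phase, 4096)     # pages still dirty, keep copying
phase = next_phase(phase, 0)        # converged, hand off to cutover
```
<p>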
Likewise, the software monitors the progress of the memory migration and automatically switches to cutover mode when the memory migration is complete. The whole process runs over Linode’s <a href="">40Gbps network</a>, so these network operations finish quickly.</p><h2 id="Cutover-The-Critical-Link"><a href="#Cutover-The-Critical-Link" class="headerlink" title="Cutover: The Critical Link"></a>Cutover: The Critical Link</h2><p>The cutover is the most important part of the live migration process; only once it is understood can live migration be understood as a whole.</p><p>At the cutover point, QEMU has confirmed that all preparations are complete and the instance can be cut over to run on the target computer. The source QEMU instance pauses both ends, which means:</p><ol><li>The guest system is “time-stopped”: its system clock will fall behind by a few seconds, so if the guest runs a time synchronization service such as NTP, the time is automatically resynchronized after the migration completes.</li><li>Network requests stop. For TCP traffic (e.g., SSH or HTTP), there is essentially no perceptible interruption of the connection; for UDP traffic (e.g., streaming video), a small number of dropped frames may result.</li></ol><p>Since both time and network requests are stopped, we want the cutover to complete as quickly as possible. However, a few checks are needed to ensure a successful cutover:</p><ul><li>Ensure that the live migration completed smoothly and without errors. On error, we roll back, unpausing the source Linode instance so it can continue operating. 
We experimented with this a great deal and fixed many errors during development; they caused plenty of headaches, but all were eventually resolved.</li><li>Ensure that networking is shut down on the source instance and comes up correctly on the target instance.</li><li>Tell the rest of our infrastructure which physical computer the migrated Linode instance is now running on.</li></ul><p>Because the time available for cutover is limited, we want to complete these checks as quickly as possible. Once they pass, the cutover proceeds: the source Linode instance automatically receives the “cutover complete” signal and the target instance is brought up, resuming from the state in which the source instance was suspended. Whatever remains of the source and target instances is then cleaned up. If the target Linode instance ever needs to be live migrated again, these steps are simply repeated.</p><h2 id="Edge-case-overview"><a href="#Edge-case-overview" class="headerlink" title="Edge case overview"></a>Edge case overview</h2><p>Most of the live migration process was straightforward to implement, but development of the feature stretched considerably once edge cases were taken into account. The project’s success owed a great deal to the management team, who believed in the vision of the tool and provided the resources needed to see it through, and of course to the many employees who believed the project would succeed.</p><p>We encountered many edge cases in these areas:</p><ul><li>Coordinating live migrations by developing in-house tools for Linode customer support staff and the hardware operations and maintenance teams. 
These tools resembled other tools of the same type that we were already using, but with slight differences, and we put a lot of development work into them:<ul><li>The tool had to be able to automatically examine all the hardware in the data center and determine which hosts would be the best target for each Linode instance that needed to be migrated. Relevant specifications for this decision include available SSD storage space and memory allocation.</li><li>The physical processor of the target computer must be compatible with the incoming Linode instance. In particular, the CPU must expose certain features (which we will call CPU tags) that are essential to the software the user is running; AES, for example, provides hardware-accelerated encryption. The target CPU must support the same CPU tags as the source computer. We found this to be an extremely complex edge case, and the approach we took is described below.</li></ul></li><li>Gracefully handling failures, including end-user intervention or loss of network connectivity during the live migration. These are also described in more detail below.</li><li>Keeping up with changes to the Linode platform itself, which is an ongoing, long-term process. For each current and future feature the Linode platform supports, we need to ensure compatibility with live migration. More on this below.</li></ul><h2 id="Failure-Handling"><a href="#Failure-Handling" class="headerlink" title="Failure Handling"></a>Failure Handling</h2><p>There is a topic rarely discussed in software: handling failure gracefully. Software is expected, at a minimum, to “run”, and getting there often takes a lot of development work; the same was true of the live migration functionality. 
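</p><p>A recurring pattern in tooling like this is a progress watchdog: if a migration reports no progress for some interval, cancel it rather than hang. A minimal sketch; the class, its API, and the one-minute default are illustrative, matching the stall behavior described below but not Linode’s actual code:</p>

```python
class ProgressWatchdog:
    """Flag a long-running transfer for cancellation if its progress
    counter stalls for longer than `timeout_s` seconds."""

    def __init__(self, timeout_s=60.0):
        self.timeout_s = timeout_s
        self.last_progress = None
        self.last_change_at = None

    def observe(self, progress, now):
        """Record one progress sample; return True when the job should be
        canceled because nothing has changed for `timeout_s` seconds."""
        if progress != self.last_progress:
            self.last_progress = progress
            self.last_change_at = now
            return False
        return (now - self.last_change_at) >= self.timeout_s

wd = ProgressWatchdog(timeout_s=60)
wd.observe(10, now=0.0)             # first sample
wd.observe(10, now=30.0)            # stalled, but under the threshold
stalled = wd.observe(10, now=61.0)  # stalled for a full minute: cancel
```
<p>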
We spent a lot of time thinking about what happens if the tool does not work and how to handle that gracefully. We considered a number of scenarios and identified a specific response for each:</p><ul><li>What if a customer uses a Linode feature from <a href="">Cloud Manager</a>? For example, the user might restart the Linode or attach a Block Storage Volume to the instance.<ul><li>Solution: the customer is perfectly free to do this. The live migration is interrupted and does not continue. This is appropriate because the migration can simply be retried later.</li></ul></li><li>What if the target Linode fails to start?<ul><li>Solution: the source hardware is notified, and another host in the data center is selected automatically by a purpose-built internal tool. The Ops team is also notified so that the failed target hardware can be investigated. This has happened in production, and our live migration handles it without problems.</li></ul></li><li>What if network connectivity is lost during migration?<ul><li>Solution: the tool autonomously monitors the progress of the live migration; if no progress has been made in the past minute, the migration is canceled and the Ops team is notified. This has never happened outside a test environment, but we are well prepared for it.</li></ul></li><li>What if the rest of the Internet is unreachable, but the source and target hardware are still running and communicating, and both source and target Linode instances are running normally?<ul><li>Solution: if the live migration has not yet reached the critical section, it is stopped and retried later.</li><li>If it has reached the critical section, the migration continues. 
This matters because the source Linode has already been suspended, so the target Linode must be started for execution to resume.</li></ul></li></ul><p>These scenarios were simulated in a test environment, and we believe the behaviors above are the best responses for each situation.</p><h2 id="Keeping-pace-with-technology-changes"><a href="#Keeping-pace-with-technology-changes" class="headerlink" title="Keeping pace with technology changes"></a>Keeping pace with technology changes</h2><p>After hundreds of thousands of successful live migrations, we cannot help wondering: “When will the development of live migration end?” Over time, live migration will be used more widely and continue to be refined, so the project can seem endless. One way to answer the question is to ask when most of the work will be done. The answer is simple: to get reliable, trustworthy software, our work will continue for a long time to come.</p><p>Over time, new features will be added to Linode, and we will have to keep working to ensure that live migration stays compatible with them. Some new features may require no new development around live migration, though we may still need to test that they work as expected; for others, the necessary compatibility work and testing may need to happen early in their development.</p><p>As with almost all software, continued research always reveals a better way to implement the same thing. For example, in the long run, a more modular integration approach for the live migration functionality would certainly reduce the maintenance burden. 
Or we might even be able to fold live migration into the underlying code, making it an out-of-the-box Linode feature.</p><p>Our team has considered all of these options, believes the tools that drive the Linode platform are alive and well, and will keep working to evolve and develop them.</p><hr><p>Translated with <a href="http://www.deepl.com/Translator">www.DeepL.com/Translator</a> (free version)</p>]]></content>
    
    
    <summary type="html">Linode Live Migration Explained</summary>
    
    
    
    
    <category term="IaaS" scheme="https://www.nablepart.com/tags/IaaS/"/>
    
    <category term="cloud" scheme="https://www.nablepart.com/tags/cloud/"/>
    
    <category term="cloud computing" scheme="https://www.nablepart.com/tags/cloud-computing/"/>
    
  </entry>
  
  <entry>
    <title>Microservices and Domain Driven Design, Architecture Practice Summary</title>
    <link href="https://www.nablepart.com/cc501e5af8c8/"/>
    <id>https://www.nablepart.com/cc501e5af8c8/</id>
    <published>2023-10-29T11:50:26.000Z</published>
    <updated>2025-08-25T09:00:39.794Z</updated>
    
<content type="html"><![CDATA[<blockquote><p>What kind of architecture can keep up with a rapidly changing business?</p></blockquote><h2 id="I-Software-Complexity"><a href="#I-Software-Complexity" class="headerlink" title="I. Software Complexity"></a>I. Software Complexity</h2><h2 id="1-Reasons-for-Complexity"><a href="#1-Reasons-for-Complexity" class="headerlink" title="1. Reasons for Complexity"></a>1. Reasons for Complexity</h2><p>As long as a software system keeps iterating, the complexity of its business, technology, and architecture keeps climbing, and development grows correspondingly harder. The root cause can be summarized in one sentence: the only constant is change.</p><ul><li>Business changes: the root cause of complexity; code expands rapidly as multiple versions are adapted for multiple clients;</li><li>Data changes: data accumulates continuously as the business develops and must be managed both horizontally and vertically;</li><li>Technical upgrades: components are upgraded continually, whether to patch vulnerabilities or to adopt better solutions;</li><li>Personnel changes: once a module’s developers move on, each handover introduces differences in code style;</li><li>Mindset swings: constantly confronting complex problems makes a calm mindset hard to sustain, which is itself a factor in staff turnover.</li></ul><p>Coping with complex change has always been the hard core of software engineering. Absorbing larger business changes with smaller architectural changes is what design means by high cohesion, low coupling. One more point is essential: complexity cannot be solved from the technical level alone; processes and standards must also be defined from a management perspective, and this is something the whole department must keep facing together.</p><h2 id="2-dealing-with-complexity"><a href="#2-dealing-with-complexity" class="headerlink" title="2, dealing with complexity"></a>2. Dealing with Complexity</h2><p>Design patterns, principles, and object orientation, as well as the clusters, microservices, and domain-driven design common in architecture, are all attempts to respond to business change more reasonably. There is no once-and-for-all solution: some forward-looking design is needed to anticipate the business, while over-design that slows delivery must also be avoided. This requires the R&amp;D team to have both business insight and technical depth.</p><p>While building a system, the business must be analyzed and understood in depth and the technical solution continually refined. For example, microservices achieve low coupling between business blocks by splitting them apart, while domain-driven design achieves high cohesion within each piece of business logic. The rest of this article analyzes the practice of both approaches in detail.</p><h2 id="Two-microservice-architecture"><a href="#Two-microservice-architecture" class="headerlink" title="Two, microservice architecture"></a>II. Microservice Architecture</h2><h2 id="1-Architecture-Design"><a href="#1-Architecture-Design" class="headerlink" title="1. Architecture Design"></a>1. Architecture Design</h2><p>System architecture design is an extremely complex matter. Over the years I have been through the following stages: single service, multi-service clusters, microservices, continuous integration; in the last two years the stable choice has been microservices plus automated integration:</p><p>The essential logic behind these changes is coping with ever more complex business systems. Whether splitting the business or designing the model, the same principle of <strong>high cohesion and low coupling</strong> is realized again and again: reduce the interdependencies between business areas, and separate the business from the technology it is tightly coupled to.</p><h2 id="2-Business-Scenarios"><a href="#2-Business-Scenarios" class="headerlink" title="2. Business Scenarios"></a>2. Business Scenarios</h2><p>Consider a classic business scenario first: e-commerce trading. A trading scenario built on a microservice architecture usually involves at least the following core services: trade, account, order, product, warehouse, and logistics.</p><p>From the business perspective, modular splitting and management, combined with continuously integrated components, can usually handle a variety of complex business scenarios; but there is no true once-and-for-all answer, and the problems brought by business change will always push the search for a more reasonable solution.</p><p>In a complete e-commerce trading scenario, far more microservices are actually involved than the figure shows, and the Trade service is intertwined with many others. Under MVC layered management there is little risk at the early stage; but once the business goes through upgrades that bring multiple versions and version-compatibility requirements, it will give 
people a feeling of extreme confusion and groundlessness.</p><p>If the team members are strong and each version allows enough time for design and optimization, the problem can be handled properly; but when deadlines are tight and the workload heavy, the resulting <strong>pressure keeps bouncing back and forth between development and testing</strong>.</p><p>Developers who have worked through such business scenarios know that refactoring plus continuous-integration capability, combined with rigorous testing, can cope with constant business change; but version compatibility still bloats the project’s code, and a mid-project handover in particular buries the person taking over, causing real mental strain.</p><h2 id="3-Problem-Analysis"><a href="#3-Problem-Analysis" class="headerlink" title="3. Problem Analysis"></a>3. Problem Analysis</h2><p>Under the MVC architectural model, a project is usually layered as follows: control layer, service layer, persistence layer, storage layer. In complex scenarios the service layer is split further, for example for third-party integrations or secondary wrapping of common middleware:</p><p>Anyone competing in a complex business line knows the defect of the MVC layered model well: the Service layer concentrates a large amount of complex logic, and the core business usually contains several implementations running past a thousand lines of code. Whatever ideas and patterns are used to split and encapsulate it, the relentless expansion of this layer is very hard to stop.</p><h2 id="4-process-oriented"><a href="#4-process-oriented" class="headerlink" title="4, process-oriented"></a>4. Process-Oriented</h2><p>In MVC layering, the procedural style of the code is obvious: entity objects are usually mapped and built from database tables and their relations, and these entities carry no behavior or logic of their own; they act purely as carriers of data and structure:</p><p>Object orientation defines a class by attributes plus behavior; in the MVC model, most entities merely define data structures for input and output parameters. They can be understood as data containers, endlessly ferried and processed between the layers.</p><h2 id="Third-domain-driven-design"><a href="#Third-domain-driven-design" class="headerlink" title="Third, domain-driven design"></a>III. Domain-Driven Design</h2><p>Compared with MVC layered design, Domain-Driven Design (DDD) offers a more reasonable solution for implementing complex business systems. The DDD model involves a great deal of terminology and abstraction; refer to <code>EricEvans</code>’ books for the full treatment, as this article describes only the core concepts used in practice.</p><h2 id="1-Separation-Model"><a href="#1-Separation-Model" class="headerlink" title="1. Separation Model"></a>1. 
Separation Model</h2><p>In its layered design, the DDD model is divided into four core layers: interface (access) layer, application layer, domain layer, and infrastructure layer. Note that this is simply the conventional server-side architectural view; it clearly pulls apart the logic that the MVC pattern piles into the service layer:</p><p>The domain layer is the key to encapsulating business complexity, while the application layer provides the core support for orchestrating the business. The whole model also thinks more vertically, effectively relieving the tendency of any single layer to grow over-complex. Even from the model design alone, managing a project’s code packages along these layers makes the design of each layer clearer and more independent.</p><h2 id="2-Design-Ideology"><a href="#2-Design-Ideology" class="headerlink" title="2. Design Ideology"></a>2. Design Ideology</h2><p>Domain-driven design is not a simple layering scheme; it involves a great deal of abstract logic and terminology, such as: domain, bounded context, entity, aggregate, value object, and so on.</p><p><strong>2.1 Domain</strong></p><p>A domain can be understood as the set of problems to be solved in a business scenario: a constraint with scope and boundaries. A domain can be split into multiple subdomains, usually described as: core domain, supporting domain, and generic domain:</p><p>Subdomains are divided with reference to business attributes. The core domain covers the most critical business scenarios and deserves a tilt of resources to sustain its growth; the supporting domain covers relatively stable business; the generic domain leans toward shared capabilities at the level of the system architecture. Partitioning the business by splitting domains matches the splitting idea of microservices, so from the business point of view the two models are 
relatively unified.</p><p><strong>2.2 Bounded Contexts</strong></p><p>This is one of the most obscure abstractions in DDD: applying a boundary to a particular model. An analogy borrowed from the original text helps: a cell exists because the cell membrane delimits what is inside and outside the cell, and determines what substances may pass through it:</p><p>Defining a bounded context involves a notion of granularity, in that each unit of granularity should be independent. In the warehousing business above, the deployed service, the warehousing subdomain, and the warehousing context can correspond one to one; or, within the warehousing subdomain, separate warehouse and shelf contexts can be defined. There is great flexibility here and no hard standard to follow.</p><p><strong>2.3 Mapping Relationships</strong></p><p>Delimit the boundaries of each context well, clarify the relationships between contexts, and clarify the order of dependencies in the business scenario, and the development process can be driven forward much more smoothly. There are also more context relationships than these diagrams show, such as shared kernel, partnership, and so on:</p><ul><li>Upstream and downstream (U-upstream, D-downstream): describes the relationship when a context is invoked; the service caller is D, the service provider is U;</li><li>Anticorruption Layer (ACL): a layer that wraps context interactions, providing validation, adaptation, transformation, etc. 
for cross-context actions;</li><li>Open Host Service, Published Language (abbreviated OHS and PL): define the access protocol;</li></ul><p>During context interactions, the anticorruption layer preserves isolation and independence, ensuring that the caller never depends directly on the service provider and thereby decoupling the different contexts; at the same time, it also brings a large amount of object-conversion work.</p><p><strong>2.4 Modeling Design</strong></p><p>Subdomains and bounded contexts split the business into blocks so it can be partitioned; the anticorruption layer lowers the coupling between bounded contexts; the aggregate idea keeps the solution to each business problem cohesive; and the strict layered model distributes the service’s supporting capabilities:</p><ul><li>Anticorruption Layer: a layer that wraps context interactions;</li><li>Domain Layer: the layer of the layered architecture responsible for designing and implementing domain logic;</li><li>Domain Service: behavior that cannot be attributed to any single entity is encapsulated in a domain service;</li><li>Aggregate: a collection of related objects describing the core domain; the aggregate is usually the unit of data modification;</li><li>Entity: an object defined by its identity rather than by its attributes; for example, a Uid identifies a user entity;</li><li>Value Object: an object that describes features or attributes but has no identity;</li><li>Factory: encapsulates the complex creation logic and types of objects;</li><li>Repository: the mechanism encapsulating storage, caching, search, and other resources, corresponding to the domain model;</li></ul><p>The core goal the domain model pursues is high cohesion and low coupling. More abstract, more complex design ideas also mean a harder implementation; but it cannot be denied that, as a solution for complex business, the logic of the domain model is indeed more reasonable.</p><h2 id="3-Engineering-Practice"><a href="#3-Engineering-Practice" class="headerlink" title="3. Engineering Practice"></a>3. Engineering Practice</h2><p>In code engineering practice, the domain model can place different subdomains into their own services, or isolate and maintain them as multiple modules (Module) within one service, i.e., one module per bounded context.</p><p>Isolating business problems through submodules, sublayers, and subpackages is a basic means of code engineering; this only describes the organization, and in actual development the class libraries are unpacked and managed according to their dependency order.</p><p>During program execution, not every command needs to pass through the domain layer. In most businesses, query commands far outnumber create, update, and delete commands, so for pure read requests the application layer can bypass the domain layer and access the infrastructure layer directly, removing one layer of data-processing logic.</p><h2 id="IV-Practice-Summary"><a href="#IV-Practice-Summary" class="headerlink" title="IV. Practice Summary"></a>IV. 
Practice Summary</h2><p>Finally, some architecture practice experience. The continual development and upgrading of technology makes solving business problems far more convenient, whether through the many mature components available to a single service, a distributed microservice system, or a domain model focused on business management. Each architectural choice has its own applicable scenarios, and different choices mean different implementation costs.</p><p>In fact, when selecting an architecture, mature and experienced leaders are extremely good at making trade-offs: what is often called taking a step back to see a broader sky. The team’s overall capability, the business requirements, and the product design all have to be taken into account, and in practice every collaborating party must make relative concessions; but the quality demanded of the core business, and of the logic that implements it, cannot be discounted.</p>]]></content>
    
    
    <summary type="html">Microservices and Domain Driven Design, Architecture Practice Summary</summary>
    
    
    
    
    <category term="IaaS" scheme="https://www.nablepart.com/tags/IaaS/"/>
    
    <category term="cloud" scheme="https://www.nablepart.com/tags/cloud/"/>
    
    <category term="cloud computing" scheme="https://www.nablepart.com/tags/cloud-computing/"/>
    
  </entry>
  
  <entry>
    <title>Plus and Minus for Techies Fighting Anxiety</title>
    <link href="https://www.nablepart.com/e570cce3b2c1/"/>
    <id>https://www.nablepart.com/e570cce3b2c1/</id>
    <published>2023-10-29T11:50:26.000Z</published>
    <updated>2025-08-25T09:00:39.794Z</updated>
    
    <content type="html"><![CDATA[<h3 id="I-Subtraction-Enhancing-focus-by-separating-topics"><a href="#I-Subtraction-Enhancing-focus-by-separating-topics" class="headerlink" title="I. Subtraction - Enhancing focus by separating topics"></a>I. Subtraction - Enhancing focus by separating topics</h3><blockquote><p>It should be said that most of the technical people have a sense of anxiety: “programmers 35 years old crisis”, “growth is too slow, work for 5 years is still a big head soldier”, “accidentally engaged in a production failure, I feel that I can not stay”, “responsible for this business seems to have no future” and so on. “, “responsible for the business does not seem to have a future” and so on.</p></blockquote><p>Anxiety comes from the uncertainty of the future and dissatisfaction with oneself. Recently, I read the book “The Courage to be Hated”, in which it is mentioned that anxiety&#x2F;low self-esteem is often due to the failure to do a good job of “separation of issues”, we tend to “other people’s trafficking in anxiety” (PUA, 35 years old crisis), imposed on their own feelings, and this kind of internal friction continues to eat away at our concentration (which could have been a good thing). This internal conflict is constantly eating away at our ability to focus (which we could be using to really improve ourselves).</p><p>Therefore, to combat anxiety, first, we need to separate the subject of “trafficked anxiety”, and second, we need to take the right posture, manage our own desires, and improve our “dissatisfaction” through spiraling “growth”. 
</p><h3 id="Addition-Matching-Reasonable-Desires-through-Spiral-Growth"><a href="#Addition-Matching-Reasonable-Desires-through-Spiral-Growth" class="headerlink" title="Addition - Matching Reasonable Desires through Spiral Growth"></a>Addition - Matching Reasonable Desires through Spiral Growth</h3><h4 id="2-1-Match-your-cognition-and-skills-with-your-desires"><a href="#2-1-Match-your-cognition-and-skills-with-your-desires" class="headerlink" title="2.1 Match your cognition and skills with your desires"></a><strong>2.1 Match your cognition and skills with your desires</strong></h4><p>The essence of growth is to spiral upward so that <strong>your cognition and skills match your desires</strong>. Desires, in turn, come from the pursuit of a sense of value.</p><p>A sense of value is deeply personal and subjective, and it is also tied to innate traits: you have to fully understand what you are good at in order to position yourself appropriately right now. An architect is not necessarily suited to be a TL&#x2F;CTO, and likewise a TL&#x2F;CTO does not get to the top solely by solving complex technical problems.</p><p>So how do you get to the top? A former boss of mine said something I strongly agree with: when there is no pit (opening), first make yourself the best carrot (candidate). Choose a track you believe in and stick with it; do well what others have done poorly, and proactively propose better plans. Then, when the pit appears, you are naturally the best carrot for it.</p><p>! 
<a href="https://pic1.zhimg.com/80/v2-62f185caf7fdc73bdbbc7c4159f2bd70_720w.webp"></a></p><h4 id="2-2-Be-wary-of-low-levels-of-hard-work"><a href="#2-2-Be-wary-of-low-levels-of-hard-work" class="headerlink" title="2.2 Be wary of low levels of hard work"></a><strong>2.2 Be wary of low levels of hard work</strong></h4><p>A hallmark of low-level hard work is passivity and repetition. Many of us today are trapped in an information cocoon (you enjoy the pleasure of 10-second knowledge fast food on short-video feeds and cannot stop; year after year, you complete trivial projects), settle into a familiar pattern, and repeat ourselves every day. The most feared outcome is 1 year of work experience repeated for 10 years, commonly known as the “toolman” or “porter”.</p><p>As said earlier, to solve the problems in your pit that others cannot, you must not rely on repeated effort and diligence alone; you need a new perception of the problem, and skills to match.</p><p><strong>“If you have a hammer in your hand, all you’ll ever see is nails” - Munger</strong></p><p>! <a href="https://pic1.zhimg.com/80/v2-ec086b98af8dbf5701cbcbadedb09834_720w.webp"></a></p><p><strong>Counter-example #1: Taking a hammer to every nail</strong></p><p>Many people are good at optimizing within familiar areas and past experience, but do not analyze the problem from its source and from a higher perspective, which often leads to one-sided, short-sighted solutions. Take test architecture: many people are good at building a better tool than the original one (solving a usability problem), but lack thinking about the essential problem to be solved (how to achieve coverage, how to improve the ROI of testing), so they never arrive at a systematic methodology oriented to the different layers of testing.</p><p>! 
<a href="https://pic2.zhimg.com/80/v2-c953b84d10eef7af3f54d3ce05345531_720w.webp"></a></p><p>(Difference between different thinking paths)</p><p><strong>Example 2: Thinking about the essence is the only way to solve a problem fundamentally</strong></p><p>Only thinking oriented to the essence of a problem can bring fundamental change. So how do we get to the essence? It is a process of continually penetrating from the surface (the concrete) to the abstract (universal principles).</p><ol><li><p><strong>Find the trunk</strong>: think in higher dimensions; trace from the child nodes back up, then traverse down from the root node, in order to find more possible ways of solving the problem. If you only look for a faster horse, you will never invent the automobile.</p></li><li><p><strong>Find the fulcrum</strong>: find the core key variable; solving it brings fundamental change. Whether you are doing business or technology, the fulcrum is the strategy, the leverage for solving the problem.</p></li><li><p><strong>Hypothesize and extrapolate</strong>: keep making hypotheses and testing them, considering not only the present but also the future.</p></li></ol><p>! <a href="https://pic1.zhimg.com/80/v2-0be130f76b6c660593ca5ad340cbd4d8_720w.webp"></a></p><p>(A case of technical capital loss)</p><p>! <a href="https://pic4.zhimg.com/80/v2-c80ed4e1cca3071597335c818db33997_720w.webp"></a></p><p>(Essence-oriented thinking)</p><h4 id="2-3-You-need-to-awaken"><a href="#2-3-You-need-to-awaken" class="headerlink" title="2.3 You need to awaken"></a><strong>2.3 You need to awaken</strong></h4><p>After some time, you no longer want to be pinned down and dragged along by the business, so you feel confused and anxious. 
As someone with ambition, you have clearly realized the problem: your knowledge and skills no longer match your goals and desires.</p><p>So you want to improve your abilities, broaden your horizons, earn promotion opportunities, and have a decent job; taken to the extreme, you want to figure out the ultimate meaning of life: Who am I? Where do I come from? Where am I going? The classic philosophical questions of life.</p><h3 id="Third-the-key-ability-to-break-the-game"><a href="#Third-the-key-ability-to-break-the-game" class="headerlink" title="Third, the key ability to break the game"></a>Third, the key abilities to break out of the impasse</h3><p>A long time ago, CEO Lucy summarized the three-ability model of Ali people (<strong>heart, brain, body</strong>). I think the summary is very accurate and will never go out of date.</p><p>! <a href="https://pic1.zhimg.com/80/v2-16edcb94642beafa5456f648907ae5bc_720w.webp"></a></p><h4 id="3-1-Heart-power-it-refers-to-the-ability-of-self-reflection-self-driven-independent-thinking"><a href="#3-1-Heart-power-it-refers-to-the-ability-of-self-reflection-self-driven-independent-thinking" class="headerlink" title="3.1 Heart power: it refers to the ability of self-reflection, self-driven, independent thinking"></a><strong>3.1 Heart power: the capacity for self-reflection, self-drive, and independent thinking</strong></h4><p>Many impressive people spend their lives working through choices, taking many detours, and fighting all kinds of failures and setbacks. Not all of them manage to stay optimistic and aggressive amid failure; some stop moving forward, while others grow stronger with each setback. “There is only one heroism in the world: to see the world as it is and to love it.” - Romain Rolland. Recognizing the self and strengthening the mind is not an easy task.</p><p>Exercising heart power starts with opening one’s heart, staying curious, and communicating with more people. 
People at different levels and in different positions of responsibility have different perspectives, which helps you form a more complete picture of yourself and reflect from every angle. People who are good at communicating usually have a more complete perception of people and things, and do not get stuck in their own world.</p><p>In times of confusion, it is better to think at the level of value (customer value, personal value) rather than fixating on specific things. Self-doubt and inner suffering only drain your mind and energy, and once you are sucked in that deeply there is no way to focus. No matter how big or small a thing is, it has value; doing it to the best of your ability and getting immediate feedback is what creates a positive cycle.</p><p>When we say a person’s potential is unlimited, the logic behind it is that the value and meaning of life are defined by you; the desire for happiness and for the sublime gives you unlimited power. You will run in one direction and give it your all because you have a powerful sense of purpose. You can get up and keep going after falling down because you recognize that the future is determined not by the past or by other people’s evaluations, but by the future you define for yourself.</p><p>That is why students are especially encouraged to participate in more complex, multi-team projects, which greatly exercise mental strength. Thinking back to the department’s 2018 event “Walking with One Heart on the Xuanzang Road”, it is hard to believe that I hiked 120 kilometers in four days and three nights. “Ideals, perseverance, action”: whenever I look back, what was exercised was less physical strength than heart power; there is nothing more difficult than this, and no higher joy than this. 
</p><p>! <a href="https://pic1.zhimg.com/80/v2-554c079e120d21db794c9bd2ea1ebc18_720w.webp"></a></p><p>! <a href="https://pic3.zhimg.com/80/v2-08e854c83779f3c79e7f90edf8707656_720w.webp"></a></p><p>(The most challenging thing, the highest pleasure)</p><h4 id="3-2-Brain-power-corresponds-to-the-power-of-thinking-and-the-resulting-professional-power"><a href="#3-2-Brain-power-corresponds-to-the-power-of-thinking-and-the-resulting-professional-power" class="headerlink" title="3.2 Brain power: corresponds to the power of thinking, and the resulting professional power"></a><strong>3.2 Brain power: the power of thinking, and the professional skill that results from it</strong></h4><p>Thinking power is, on one hand, logical ability: when we say a person is very smart, it is largely because their logical reasoning is clear and they are good at solving a concrete technical problem (designing an algorithm, troubleshooting a stack trace). On the other hand, it is the ability to systematize (structure), which determines whether thinking about a problem is comprehensive and whether complex problems can be made simple, for example problem definition, technology planning, domain abstraction, and global architecture design. <strong>So technical thinking power is the ability to ZoomIn and ZoomOut within the technical field, representing the depth and breadth of thinking.</strong></p><p>Thinking is the method; it needs to be translated into professional knowledge and ability. For technology, that means accumulation in a technical field, with output or representative work of your own. 
When we say someone is professional, we mean not only that they talk about methods, but that they have delivered what can be called representative work.</p><h4 id="3-3-Physical-Strength-Corresponding-to-the-power-of-action-and-the-power-of-execution"><a href="#3-3-Physical-Strength-Corresponding-to-the-power-of-action-and-the-power-of-execution" class="headerlink" title="3.3 Physical Strength: Corresponding to the power of action, and the power of execution"></a><strong>3.3 Physical strength: the power of action and execution</strong></h4><p>To quote Luo Xiang: “The farthest distance in the world is not the distance between Mount Everest and the Mariana Trench, but the distance between knowing and doing.”</p><p>Physical strength does not necessarily mean brute force; it is the decisiveness and self-drive that follow from thinking with heart and brain, the transformation of ideas into practical action, and the willingness to change. Second is execution: formulating strategies, implementing them firmly, getting results, and continually revising your own perception through positive and negative feedback.</p><p>Knowing more and traveling farther are no substitute for winning an actual battle. 
For new students, I generally recommend getting to the front line as soon as possible and participating in real projects; even if you get bruised, that is where growth is fastest.</p><h3 id="Fourth-how-to-learn"><a href="#Fourth-how-to-learn" class="headerlink" title="Fourth, how to learn?"></a>Fourth, how to learn?</h3><h4 id="4-1-Cognize-yourself-more-important-than-cognitive-knowledge-points"><a href="#4-1-Cognize-yourself-more-important-than-cognitive-knowledge-points" class="headerlink" title="4.1 Cognize yourself, more important than cognitive knowledge points"></a><strong>4.1 Knowing yourself matters more than knowing knowledge points</strong></h4><p>Understand your own shortcomings and be deliberate about what you learn; the problem itself is the best teacher and will lead you onward. Do not learn on a whim (for example, a front-line technical student deciding to study corporate strategy). Second, fit the learning to yourself: for a student whose ability to express himself is very poor, I would not recommend studying public speaking and debate, but rather training foundational thinking such as the “Pyramid Principle” and “Structured Thinking” (think clearly first, in order to speak clearly). Similarly, some people are easily shattered and weak under stress, not necessarily because of poor execution, but likely because of a wrong self-perception and self-evaluation, so improving the underlying heart power is what matters.</p><p>Analyze where you need to improve, and at what level. I once met a student who was very hardworking and eager to improve himself, expecting to take charge of more areas, but my advice to him was to first improve his “structured thinking and expression”: no matter what project he did, people found it very difficult to communicate with him, and he himself lacked the ability to summarize and refine his ideas.</p><p>! 
<a href="https://pic4.zhimg.com/80/v2-899dcbee5f73bde4999ef5d19c9d7f1f_720w.webp"></a></p><p>Meta-knowledge: the methods and truths of social consensus, such as the XXX principle or the XXX method. It is independent of any particular industry and can be acquired through reading and experience; read more good, authoritative books.</p><p>Knowledge: the body of knowledge is vast, a bit like Wikipedia, touching every industry; it is not necessarily authoritative, but offers a certain inspiration. Most of the books people read and the short videos they watch fall into this category.</p><p>Tacit knowledge (invisible knowledge): the stringing together and application of knowledge points to form a solution to a specific problem. This is the biggest difference between people, and it is usually hard to learn from books; it comes instead through each person’s cycle of inspiration-&gt;practice-&gt;verification.</p><p>So as technical people, at different stages we have to master a knowledge system:</p><p>! <a href="https://pic3.zhimg.com/80/v2-70e3c3d708dd19deacadfd1503be8faa_720w.webp"></a></p><p>(Taking payment platform technology as an example)</p><h4 id="4-2-Learning-is-not-to-get-information-but-to-internalize-it-into-your-own-understanding"><a href="#4-2-Learning-is-not-to-get-information-but-to-internalize-it-into-your-own-understanding" class="headerlink" title="4.2 Learning is not to get information, but to internalize it into your own understanding"></a><strong>4.2 Learning is not about getting information, but about internalizing it into your own understanding</strong></h4><p>First of all, do not study only in fragmented slivers of time; establish your own macro-level thread of knowledge to learn along. 
In the era of information explosion, short videos hook you because a story within 10 seconds gives you a jolt of inspiration; but if you are inspired by ten different stories a day, you cannot summarize them into a coherent one-two-three.</p><p>(By comparison, recommended books and columns usually require systematic organization and conception, so their logic and structure are more complete. Besides the classics, which carry the underlying meta-knowledge, popular bestsellers can serve as an aid to interest.)</p><p>Secondly, learn to slow down when something triggers your thinking or makes you stuck, such as a new concept, an obscure piece of logic, or a suggestion someone gives you. When new concepts conflict with your perceptions, do not rush to an answer from within the boundaries of your experience; instead, <strong>get used to asking a few more whys (the 5 Whys rule)</strong> until you find the most essential answer.</p><p>When it comes to promotion, knowledge points can be crammed in a short sprint, while competence cannot be built quickly. The reason is that people are used to thinking fast, and it is most natural to answer from the intuition within one’s experience. 
Rational thinking outside those boundaries, however, cannot be acquired and expressed naturally through short-term cramming.</p><h4 id="4-3-Learning-by-example-teaching-by-example-and-improving-knowledge-retention"><a href="#4-3-Learning-by-example-teaching-by-example-and-improving-knowledge-retention" class="headerlink" title="4.3 Learning by example, teaching by example, and improving knowledge retention"></a><strong>4.3 Learning by example, teaching by example, and improving knowledge retention</strong></h4><p>While a single truth can be understood through a moment of inspiration, acquiring a systematized body of knowledge is difficult and challenging.</p><p>The reason is that ordinary people are naturally receptive to concrete content (closest to experience, and enlightening), whereas in the face of abstract knowledge understanding, the</p>]]></content>
    
    
    <summary type="html">Plus and Minus for Techies Fighting Anxiety</summary>
    
    
    
    
    <category term="IaaS" scheme="https://www.nablepart.com/tags/IaaS/"/>
    
    <category term="cloud" scheme="https://www.nablepart.com/tags/cloud/"/>
    
    <category term="cloud computing" scheme="https://www.nablepart.com/tags/cloud-computing/"/>
    
  </entry>
  
  <entry>
    <title>Reader in Java in detail</title>
    <link href="https://www.nablepart.com/31008f4f52c4/"/>
    <id>https://www.nablepart.com/31008f4f52c4/</id>
    <published>2023-10-29T11:50:26.000Z</published>
    <updated>2025-08-25T09:00:39.794Z</updated>
    
    <content type="html"><![CDATA[<h2 id="Preface"><a href="#Preface" class="headerlink" title="Preface"></a>Preface</h2><p>In Java development we often need to read data from files, and reading data requires a suitable class to handle it. Java’s IO package provides a number of classes for reading and writing data, and Reader is one of them. In this article, we will introduce Reader in Java in detail, and analyze its advantages, disadvantages, and application scenarios.</p><h2 id="Abstract"><a href="#Abstract" class="headerlink" title="Abstract"></a>Abstract</h2><p>In this article, we will introduce the <code>Reader</code> class in Java in detail from the following aspects:</p><ol><li>Overview of the Reader class</li><li>Reader class code analysis</li><li>Application examples of the Reader class</li><li>Advantages and disadvantages of the Reader class</li><li>Introduction to the methods and source code of the Reader class</li><li>Test cases for the Reader class</li><li>Summary and conclusion</li><li>Attached source code</li><li>Recommendations</li></ol><p>This article explains the Reader in Java in detail, aiming to help developers better grasp its use.</p><h2 id="Reader-class"><a href="#Reader-class" class="headerlink" title="Reader class"></a>Reader class</h2><h2 id="Overview"><a href="#Overview" class="headerlink" title="Overview"></a>Overview</h2><p>The Reader class is an abstract class for reading character streams in Java. 
It is the superclass of all character input streams and provides the basic functionality for reading them. Common concrete subclasses of Reader include InputStreamReader (and its subclass FileReader), CharArrayReader, and StringReader.</p><h2 id="Source-code-analysis"><a href="#Source-code-analysis" class="headerlink" title="Source code analysis"></a>Source code analysis</h2><p>The <code>Reader</code> class is an abstract class whose source code is defined as follows:</p><p>public abstract class Reader implements Readable, Closeable {<br>    …<br>}</p><p>Reader implements two interfaces: <code>Readable</code> and <code>Closeable</code>. There is only one method defined in the <code>Readable</code> interface:</p><p>public interface Readable {<br>    int read(CharBuffer cb) throws IOException;<br>}</p><p>And there is only one method defined in the <code>Closeable</code> interface as well:</p><p>public interface Closeable extends AutoCloseable {<br>    void close() throws IOException;<br>}</p><p>The purpose of these two interfaces is to provide methods for reading characters and closing resources, respectively.</p><h2 id="Application-Scenario-Examples"><a href="#Application-Scenario-Examples" class="headerlink" title="Application Scenario Examples"></a>Application Scenario Examples</h2><p>The Reader class is usually used to read data from a text file. For example, BufferedReader, a subclass of Reader that wraps another Reader, is often used to read the data in a text file line by line. In addition, Reader can also be used to read network data, console input, and other sources.</p><p>The following are a few examples of application scenarios using the Reader class, for students’ reference only:</p><h3 id="1-Reading-Text-Files"><a href="#1-Reading-Text-Files" class="headerlink" title="1. Reading Text Files"></a>1. Reading Text Files</h3><p>It is very common to use the FileReader class to read text files. 
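Before the file examples, the two interfaces above can be seen in action on any Reader: <code>read(CharBuffer)</code> comes from <code>Readable</code>, and try-with-resources works because <code>Closeable</code> extends <code>AutoCloseable</code>. A minimal sketch (the class name <code>ReadableDemo</code> and the sample text are ours, for illustration):

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.nio.CharBuffer;

public class ReadableDemo {
    public static void main(String[] args) throws IOException {
        // try-with-resources: Reader implements Closeable, so it is closed automatically.
        try (Reader reader = new StringReader("Hello, Reader")) {
            CharBuffer buf = CharBuffer.allocate(32);
            int n = reader.read(buf); // Readable.read(CharBuffer): returns chars read, or -1
            buf.flip();               // switch the buffer from writing mode to reading mode
            System.out.println(n + ": " + buf);
        }
    }
}
```

The same pattern applies unchanged to FileReader or InputStreamReader, since they all inherit both interfaces from Reader.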
For example, you can use the combination of <code>FileReader</code> and <code>BufferedReader</code> to read a text file and output it line by line:</p><pre><code>//1. Read a text file
public static void testReadFile() {
    try {
        FileReader fileReader = new FileReader("./template/fileTest.txt");
        BufferedReader bufferedReader = new BufferedReader(fileReader);
        String line;
        while ((line = bufferedReader.readLine()) != null) {
            System.out.println(line);
        }
        bufferedReader.close();
        fileReader.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}</code></pre><p>Running this demo locally, the result can be seen as follows:</p><p>! <a href="https://static001.geekbang.org/infoq/46/4601c49043ccf9f56359b5c5d0af8232.png"></a></p><h3 id="2-Reading-Network-Resources"><a href="#2-Reading-Network-Resources" class="headerlink" title="2. Reading Network Resources"></a>2. Reading Network Resources</h3><p>You can use the InputStreamReader and URL classes to read network resources, for example:</p><pre><code>//2. Read network resources
public static void testReadURL() throws IOException {
    URL url = new URL("https://www.baidu.com/");
    URLConnection conn = url.openConnection();
    InputStream is = conn.getInputStream();
    InputStreamReader isr = new InputStreamReader(is);
    BufferedReader br = new BufferedReader(isr);
    String line;
    while ((line = br.readLine()) != null) {
        System.out.println(line);
    }
    br.close();
    isr.close();
    is.close();
}

public static void main(String[] args) throws IOException {
    testReadURL();
}</code></pre><p>Running this demo locally, the result can be seen as follows:</p><p>! <a href="https://static001.geekbang.org/infoq/71/712b250c12cb54311de0704bca22f7f7.png"></a></p><h3 id="3-Reading-a-String"><a href="#3-Reading-a-String" class="headerlink" title="3. Reading a String"></a>3. Reading a String</h3><p>The StringReader class can be used to convert a string to a stream of characters, for example:</p><pre><code>//3. Read a string
public static void testReadStr() throws IOException {
    String str = "Hello, World!!!";
    StringReader stringReader = new StringReader(str);
    int data;
    while ((data = stringReader.read()) != -1) {
        System.out.print((char) data);
    }
    stringReader.close();
}

public static void main(String[] args) throws IOException {
    testReadStr();
}</code></pre><p>Running this demo locally, the result can be seen as follows:</p><p>! 
<a href="https://static001.geekbang.org/infoq/74/7468a1631d08ec9fcdb135dedca298f5.png"></a></p><p>Through the introduction and demonstration of the above three common application scenarios, you can easily read various types of character stream data by using the subclasses of the Reader class. If you have more cases relevant to your life or work, please feel free to share them in the comment section; shared fun is better than fun enjoyed alone.</p><h2 id="Pros-and-Cons"><a href="#Pros-and-Cons" class="headerlink" title="Pros and Cons"></a>Pros and Cons</h2><h3 id="Pros"><a href="#Pros" class="headerlink" title="Pros"></a>Pros</h3><ol><li>The <code>Reader</code> class supports character stream reading and can accurately read data from text files.</li><li>The <code>Reader</code> family handles character encoding (via InputStreamReader), converting bytes to characters when reading files.</li><li>The <code>Reader</code> class can take on different functions through its subclasses, making it flexible to use.</li></ol><h3 id="Disadvantages"><a href="#Disadvantages" class="headerlink" title="Disadvantages"></a>Disadvantages</h3><ol><li>The <code>Reader</code> class reads data relatively slowly and is not suitable for reading binary data.</li><li>The <code>Reader</code> class cannot access data in a file randomly; it can only read sequentially, which is less efficient for large files.</li><li>The <code>Reader</code> class can be cumbersome to use; you need buffering and similar techniques to improve reading speed and efficiency.</li></ol><h2 id="Class-Code-Methods"><a href="#Class-Code-Methods" class="headerlink" title="Class Code Methods"></a>Class Code Methods</h2><h3 id="Constructor"><a href="#Constructor" class="headerlink" title="Constructor"></a>Constructor</h3><p>protected Reader()</p><p>The default constructor of the Reader class.</p><h3 id="Methods"><a href="#Methods" class="headerlink" title="Methods"></a>Methods</h3><p>public int read() throws IOException</p><p>Usage: Reads a single character and returns it as an int in the range 0 to 65535; returns -1 if the end of the stream has been reached.</p><p>public int read(char[] cbuf) throws IOException</p><p>Usage: Reads characters into an array and returns the number of characters read.</p><p>public int read(char[] cbuf, int offset, int length) throws IOException</p><p>Usage: Reads up to the specified number of characters into a portion of an array and returns the number of characters read.</p><p>public long skip(long n) throws IOException</p><p>Usage: Skips n characters (including whitespace) and returns the number of characters actually skipped.</p><p>public boolean ready() throws IOException</p><p>Usage: Tells whether the stream is ready to be read; returns true if a character can be read.</p><p>public boolean markSupported()</p><p>Usage: Tells whether the stream supports the mark() operation; returns true if it does, false otherwise.</p><p>public void mark(int readAheadLimit) throws IOException</p><p>Usage: Marks the current position in the stream, so that a later reset() repositions the stream here. If the stream does not support mark(), an IOException is thrown.</p><p>public void reset() throws IOException</p><p>Purpose: Repositions the stream to the most recent mark. 
If the stream does not support the reset() operation, an IOException is thrown.</p><p>abstract public void close() throws IOException</p><p>Purpose: Closes the stream and releases all resources associated with it.</p><h2 id="Test-case"><a href="#Test-case" class="headerlink" title="Test case"></a>Test case</h2><p>The following is a test case for reading a file using the Reader class:</p><h3 id="Test-code-demo"><a href="#Test-code-demo" class="headerlink" title="Test code demo"></a>Test code demo</h3><pre><code>package com.example.javase.io.reader;

import java.io.File;
import java.io.FileReader;
import java.io.IOException;
import java.io.Reader;

/**
 * @author bugs
 * @version 1.0
 * @date 2023/10/19 10:34
 */
public class ReaderTest {

    public static void main(String[] args) throws IOException {
        File file = new File("./template/fileTest.txt");
        Reader reader = new FileReader(file);
        char[] buffer = new char[1024];
        int len;
        while ((len = reader.read(buffer)) != -1) {
            System.out.println(new String(buffer, 0, len));
        }
        reader.close();
    }
}</code></pre><p>According to the above test case, let’s execute the main function to test reading the character data of the file; the result is shown in the following screenshot:</p><p>! <a href="https://static001.geekbang.org/infoq/f8/f8fb40ac8922c1b3472ed61c75d91467.png"></a></p><p>By comparing the console output with the original text, we can see that the test case reads the file correctly using the Reader class.</p><h3 id="Code-Analysis"><a href="#Code-Analysis" class="headerlink" title="Code Analysis"></a>Code Analysis</h3><p>The above test code uses the Reader class to read character data from a file. 
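As a side note before the step-by-step analysis: since Java 7, the manual <code>reader.close()</code> can be replaced with try-with-resources, which closes the Reader even if reading throws. A self-contained sketch (the class name and sample contents are ours; it writes a temporary file first, so no fixed path needs to exist):

```java
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;

public class TryWithResourcesTest {
    public static void main(String[] args) throws IOException {
        // Create a small sample file so the demo runs anywhere.
        File file = File.createTempFile("fileTest", ".txt");
        try (FileWriter writer = new FileWriter(file)) {
            writer.write("line one\nline two\n");
        }
        // The reader is closed automatically when the block exits, even on exceptions.
        try (BufferedReader reader = new BufferedReader(new FileReader(file))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
        file.delete();
    }
}
```

This works because, as shown in the source code analysis above, Reader implements Closeable, which extends AutoCloseable.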
The following is a step-by-step analysis of the code to help students understand it faster.</p><p>First, we create a File object specifying the path of the file to be read, and then construct a <code>FileReader</code> on it, which gives us a <code>Reader</code> object. We then use a <code>char[]</code> array as a buffer, read data from the <code>Reader</code> into the buffer, and use the <code>String</code> class to convert the buffered data into a string and print it to the console, until all the data has been read. Finally, we close the Reader object to release the related resources. The whole reading process is very simple; did you learn it?</p><h2 id="Full-Summary"><a href="#Full-Summary" class="headerlink" title="Full Summary"></a>Full Summary</h2><p>This article provides a detailed introduction to the <code>Reader</code> class in Java, including an overview, source code analysis, application scenarios, an analysis of its strengths and weaknesses, its methods, and test cases. Through this article, we can better grasp the use of <code>Reader</code> and apply the <code>Reader</code> class sensibly in development.</p><h2 id="Summary"><a href="#Summary" class="headerlink" title="Summary"></a>Summary</h2><p>The <code>Reader</code> class is an abstract class for reading character streams in Java. It has the advantages of reading text data and handling character encoding, and it can take on different functions through its subclasses. However, the Reader class reads data relatively slowly, is not suitable for reading binary data, and cannot randomly access data in a file. When using the <code>Reader</code> class, use buffering and similar techniques to improve reading speed and efficiency. Finally, be careful to close resources to avoid resource leaks.</p>]]></content>
    
    
    <summary type="html">A detailed look at the Reader class in Java</summary>
    
    
    
    
    <category term="IaaS" scheme="https://www.nablepart.com/tags/IaaS/"/>
    
    <category term="cloud" scheme="https://www.nablepart.com/tags/cloud/"/>
    
    <category term="cloud computing" scheme="https://www.nablepart.com/tags/cloud-computing/"/>
    
  </entry>
  
  <entry>
    <title>Seller&#39;s and Buyer&#39;s Shows of Database Performance</title>
    <link href="https://www.nablepart.com/bb4a4d3931d9/"/>
    <id>https://www.nablepart.com/bb4a4d3931d9/</id>
    <published>2023-10-29T11:50:26.000Z</published>
    <updated>2025-08-25T09:00:39.794Z</updated>
    
    <content type="html"><![CDATA[<blockquote><p>Article source: WeChat public account “All Brothers Within the Four Seas”</p><p>Author: Xue Xiaogang, Oracle ACE&#x2F;PG ACE partner&#x2F;TiDB MVA, Evangelist&#x2F;OCP Lecturer&#x2F;ITPUB Core Expert&#x2F;InkTen Wheel MVP&#x2F;Oracle Certified&#x2F;MySQL Certified&#x2F;PG Certified&#x2F;Oceanbase Certified&#x2F;Damon Database Certified&#x2F;TiDB Certified</p></blockquote><p>Last week I met a friend from Huawei who mentioned a lot of database performance metrics. First of all, I do not doubt his numbers; nine times out of ten they are true. Published benchmark data is basically reliable nowadays, because outrageous figures would be questioned by peers. My only caveat is that in real environments, users usually cannot reach even one percent of those numbers. It is not that the product does not work; it is that the way users use it does not work.</p><p>When we database practitioners debate database performance, we argue over which metrics to look at, and over centralized versus distributed architectures. (Yesterday I read an article comparing TPS, QPS, and so on between distributed and centralized databases.) Whether to look at TPS, QPS, or RT, neither side can convince the other for long. My point is that in a real environment, whichever metric you look at, the published numbers are never reached. The official figures are ideal numbers from a laboratory, while real environments are harsh. The status quo in our country is that application developers do not understand the database: “I just implement whatever the requirements say.” Often the logic in the requirements is simply wrong, and nobody cares. Add to that schemas that were never properly designed, SQL statements running to thousands or tens of thousands of lines, and sometimes a single SQL statement of hundreds of MB. 
Under this kind of beating from actual usage, whatever performance the database has is ground into the floor.</p><p>In the past, a POC might involve only a few hundred rows of data; having been fooled enough times, testers are now wary, and test data volumes may reach tens of millions or even hundreds of millions of rows. I once saw several domestic database vendors complain that users tested their products with 10-million-row tables and no indexes. The vendors felt aggrieved, but this is in fact our domestic status quo: most of the business scenarios that run on Oracle have no indexes either. To put it bluntly, developers are assessed only on function, not on performance; as long as the functional logic works, that is enough. Yet in handling problems at various enterprises I have found that sometimes even the functional logic is not right.</p><p>So users know their own status quo and have very little control over development quality. If database A can withstand that kind of SQL, then when switching to database B they do not expect the application to be rewritten; the same messy SQL is thrown at it, and the question is simply whether it can cope. If it cannot, it will not be considered.</p><p>This is what I mean by the seller’s show of database performance (which assumes good designers and skilled developers), while the buyer’s show is that there is nothing the vendor cannot imagine and nothing developers cannot do. A Cartesian product in production? You need to experience that reality check.</p><p>The TPC-C numbers we see are amazing, but they have never seen the table designs, requirements, and SQL of our real environments. Run TPC-C against that situation and whether the system merely struggles or falls over completely is an open question.</p><p>A friend from Huawei said he wanted me to provide a real user scenario to see how their product would hold up. In that I saw a different attitude from Huawei. 
Unlike some, they do not claim to be “far ahead,” and I appreciate that attitude. If I have nothing else, I have plenty of lessons written in blood and tears in this area, and these are lessons Internet companies rarely see, which comes down to company genes.</p><p>Based on the above, performance problems caused by sloppy SQL also appear on distributed databases such as OceanBase and TiDB; they have nothing to do with standalone versus distributed. A dress may look great on Dilraba Dilmurat, but put it on a 200-pound person and never mind looking good, whether the seams hold together at all (read: whether the database OOMs) is another matter. That has nothing to do with whether it is a top or a dress; the real problem is that nobody is managing their figure.</p><p>Perhaps only by taking this seriously and managing it properly can we reduce and avoid such problems, just as drunk driving has become almost unheard of nowadays. When that day comes, then we can talk about how to test database performance, and about distributed versus centralized performance.</p>]]></content>
    
    
    <summary type="html">Seller&#39;s and Buyer&#39;s Shows of Database Performance</summary>
    
    
    
    
    <category term="IaaS" scheme="https://www.nablepart.com/tags/IaaS/"/>
    
    <category term="cloud" scheme="https://www.nablepart.com/tags/cloud/"/>
    
    <category term="cloud computing" scheme="https://www.nablepart.com/tags/cloud-computing/"/>
    
  </entry>
  
  <entry>
    <title>From climbing to running - why do we need unit tests?</title>
    <link href="https://www.nablepart.com/6a5df719cbbe/"/>
    <id>https://www.nablepart.com/6a5df719cbbe/</id>
    <published>2023-10-29T11:50:26.000Z</published>
    <updated>2025-08-25T09:00:39.794Z</updated>
    
    <content type="html"><![CDATA[<p>Cha Baidao is a tea beverage chain brand from Chengdu, Sichuan Province, founded in 2008. After 15 years of development it has become a benchmark food-and-beverage brand, with more than 7,000 stores across 31 provinces and municipalities, achieving full coverage of all provinces and prefecture-level cities in mainland China. On March 31, 2021, at the Chengdu-Chongqing Food and Beverage Summit, Cha Baidao was awarded the “2021 Chengdu-Chongqing Food and Beverage Benchmarking Brand Award”. In August 2021 it was selected by iiMedia Ranking as one of the “Top 15 New Tea Drink Brands in China in the First Half of 2021”. On June 9, 2023, Cha Baidao secured a new round of financing led by Soroptimist Asia and followed by a number of renowned investment institutions, with its valuation soaring to 18 billion yuan, the highest in China.</p><p>In April this year, Cha Baidao held a brand upgrade conference at its headquarters in Chengdu, announcing that the number of its stores had exceeded 7,000. According to the China Chain Store Management Association, as of December 31 of 2020, 2021, and 2022, Cha Baidao had 2,240, 5,070, and 6,532 stores respectively; the pandemic did not slow the pace of its expansion.</p><p><strong>With the rapid expansion of its business scale, Cha Baidao accelerated its digital transformation strategy.</strong> But because some of Cha Baidao’s early business systems were provided by external SaaS service providers, they were unable to meet the requirements of large scale, high concurrency, elastic scaling, agility, and observability brought about by the rapid growth of its online business. 
To meet the needs of online and offline store customers and business growth, for core chain services such as store services, POS, user transactions, platform integration, store management, and beverage production, <strong>Cha Baidao chose comprehensive in-house development combined with the native capabilities of Alibaba Cloud to drive a full upgrade toward containerization, microservices, and observability.</strong></p><h3 id="Business-value-of-cloud-nativeization"><a href="#Business-value-of-cloud-nativeization" class="headerlink" title="Business value of cloud nativeization"></a>Business value of cloud nativeization</h3><p>The tea beverage industry faces market competition pressure and the need to improve internal operational efficiency. To meet these challenges, Alibaba Cloud and Cha Baidao worked together to complete a cloud-native transformation and start a new journey of digitization.</p><p>Container and microservice technology makes applications lightweight and highly portable. It allows the enterprise to deploy and scale applications more flexibly, respond quickly to market demand, achieve high availability and elastic scalability, and keep the business running stably whether facing sudden traffic peaks or system failures. The introduction of continuous delivery and continuous integration helps the enterprise achieve rapid iteration and deployment. 
By automating the process, the enterprise can roll out new features and products faster, keeping pace with the market and gaining a head start.</p><p>The cloud-native transformation brings not only higher security, availability, and scalability, but also improves the organization’s ability to innovate and compete.</p><h3 id="Observable-challenges-brought-by-cloud-native"><a href="#Observable-challenges-brought-by-cloud-native" class="headerlink" title="Observable challenges brought by cloud native"></a>Observable challenges brought by cloud native</h3><p>As an emerging restaurant brand with rapidly developing business, Cha Baidao handles a huge number of online orders every day, backed by close integration with Internet technology and a high degree of digitization that supports its huge sales volume. There are therefore very strict requirements on the continuity and availability of the business system to ensure stable operation of the core transaction-chain services. Especially during daily peak ordering hours, marketing campaigns, and unexpected hot events, every link of the entire microservice system must guarantee quality of service under high concurrency and high traffic in order to give users a smooth experience.</p><p>A complete full-link observability platform and APM (Application Performance Management) tools are prerequisites for guaranteeing business continuity and availability. In building its observability stack, the Cha Baidao technical team went through considerable exploration. 
Before containerization was fully realized, Cha Baidao deployed an open-source APM tool on some microservice systems and validated it for more than a year, but in the end it could not be rolled out to the entire microservice architecture, mainly for these reasons:</p><ul><li><strong>The trade-off between metrics accuracy and sampling rate is hard to balance</strong>.</li></ul><p>An appropriate sampling strategy is an important means of controlling the cost and performance overhead of link tracing tools. If the APM tool is fixed at 100% full link collection, it stores a large amount of duplicate link information. At the scale of Cha Baidao’s microservice system, 100% link collection would push the storage cost of the observability platform beyond expectations, and it would also have some impact on the performance of the microservice applications themselves during business peaks. However, when a sampling strategy is configured, open-source tools compromise the accuracy of the metrics data, so that important observability metrics such as error rate and P99 response time lose their value for observation and alerting.</p><ul><li><strong>Lack of higher-order alerting capabilities</strong>.</li></ul><p>Open-source tools are relatively simplistic in their alerting: users need to build their own alert processing and distribution platforms just to realize basic functions such as sending alert messages to IM groups. Because Cha Baidao has many service modules and complex dependencies after microservicing, the abnormality or unavailability of one component often leads to a large number of redundant alarms along the whole chain, forming an alarm storm. 
As a result, the operations team becomes exhausted dealing with a huge variety and volume of alarm messages, and it is very easy to miss the important messages that actually matter for troubleshooting.</p><ul><li><strong>Limited means of troubleshooting</strong>.</li></ul><p>Open-source APM tools rely mainly on trace link information to help users locate faults; for simple performance problems in a microservice system, users can quickly find the performance bottleneck or fault source. In a real production environment, however, many hard problems cannot be solved by simple link analysis, such as N+1 queries, memory OOM, excessive CPU usage, or a saturated thread pool. This places high demands on the technical team, which needs engineers with a deep understanding of the underlying technical details and rich SRE experience to locate the root cause of a failure quickly and accurately.</p><h3 id="Accessing-AliCloud’s-application-real-time-monitoring-service-ARMS"><a href="#Accessing-AliCloud’s-application-real-time-monitoring-service-ARMS" class="headerlink" title="Accessing AliCloud’s application real-time monitoring service ARMS"></a>Accessing AliCloud’s application real-time monitoring service ARMS</h3><p>In the process of fully cloud-nativizing Cha Baidao’s system architecture, Cha Baidao’s technical team and Alibaba Cloud’s engineers discussed in depth how best to land full-link observability.</p><p>As an important member of Alibaba Cloud’s cloud-native observability product family, ARMS application monitoring provides thread profiling, intelligent insights, CPU &amp; memory diagnostics, alert integration, and other capabilities not available in open-source APM products. 
At Alibaba Cloud’s suggestion, the Cha Baidao technical team tried connecting one business module to ARMS application monitoring.</p><p>Since ARMS provides automatic application onboarding in the Container Service ACK environment, adding just 2 lines to each application’s YAML file is enough to automatically inject the probe and complete the whole onboarding process. After a period of trial, Cha Baidao’s engineers kept discovering practical value in ARMS application monitoring. Cha Baidao also uses Alibaba Cloud’s performance testing product, PTS, for capacity planning of both daily operations and big promotions. With the introduction of ARMS and PTS, Cha Baidao’s daily operations and stability assurance system has undergone many upgrades.</p><h3 id="Build-emergency-response-system-around-ARMS-alert-platform"><a href="#Build-emergency-response-system-around-ARMS-alert-platform" class="headerlink" title="Build emergency response system around ARMS alert platform"></a>Build emergency response system around ARMS alert platform</h3><p>Because alarm storms were frequently encountered when building an alerting platform on open-source products, Cha Baidao was very cautious about configuring alert rules, converging the alert targets to the most serious business failures as far as possible. 
Although this avoided frequent harassment of the SRE team by alarm storms, a lot of valuable information, such as a sudden increase in interface response time, could be ignored.</p><p>In fact, the industry has a set of standard solutions to the alarm-storm problem, involving key techniques such as deduplication, compression, noise reduction, and silencing, but integrating these techniques with observability products is fairly complex, and many open-source products do not offer a complete solution in this area.</p><p>These key alerting techniques are fully implemented in the ARMS alert platform. Take event compression as an example: ARMS provides two types, label-based compression and time-based compression. Multiple events that meet the conditions are automatically compressed into a single alert for notification (as shown in the figures below).</p><p><img src="https://pic2.zhimg.com/80/v2-f8f820cb1a1e2e4cb9e31892a88154ed_720w.webp"></p><p>Figure: Tag-based compression</p><p><img src="https://pic3.zhimg.com/80/v2-63455c4e67cec8ccd99354982ec160ae_720w.webp"></p><p>Figure: Time-based compression</p><p>With the various technical means provided by the ARMS alert platform, the alarm-storm problem can be solved very effectively. The Cha Baidao technical team therefore began to invest in alerting, gradually enriching the alert rules to cover different levels such as application interfaces, host metrics, JVM parameters, and database access.</p><p>Through integration with enterprise WeChat groups, alert notifications interact with the ITSM process: when the on-duty engineer receives an alert notification, they can close the alert, escalate the event, and so on directly from the IM tool, handling the alert quickly (as shown in the following figure).</p><p>
<img src="https://pic2.zhimg.com/80/v2-868ec5e58d249070b4136ded69da9f51_720w.webp"></p><p>Figure: Intelligent convergence and notification of monitoring alarm events</p><p>The flexible and open alert disposal strategy meets the needs of different time windows and scenarios. Building on it, Cha Baidao started constructing an enterprise-level emergency response system with reference to Alibaba’s best practices for safe production. Emergency scenarios from the business perspective serve as the core model for incident response, with the corresponding fault-handling process identified and routed according to alert level. These are practices Cha Baidao worked out after its full cloud-native transformation, and they significantly improve quality of service in the production environment.</p><h3 id="Introducing-Sampling-Strategy"><a href="#Introducing-Sampling-Strategy" class="headerlink" title="Introducing Sampling Strategy"></a>Introducing Sampling Strategy</h3><p>Extracting metrics data from link information is a necessary function of every APM tool. Unlike the crude metric extraction in open-source products, ARMS application monitoring uses end-side pre-aggregation that captures every real request, first aggregating, then sampling, and then reporting, to provide accurate metric monitoring. This ensures that metrics remain consistent with reality even when a sampling policy is enabled.</p><p><img src="https://pic1.zhimg.com/80/v2-1dcc3e8146be2d84b8fb7d7e6494f9bc_720w.webp"></p><p>Figure: ARMS end-side pre-aggregation capability</p><p>To reduce the application performance overhead caused by APM tools, Cha Baidao adopts a 10% sampling rate for most applications and an adaptive sampling strategy for applications with very high TPS, further reducing the overhead during peak hours. 
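The aggregate-first, sample-later idea described above can be illustrated in a few lines. Everything here is a hypothetical sketch, not ARMS internals: every request updates the counters before any sampling decision, and only about 10% of trace IDs are retained, so count, error, and latency metrics stay exact at any sampling rate.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Hypothetical sketch of end-side pre-aggregation: metrics see 100% of
// requests, while only ~10% of traces are kept to bound storage cost.
public class PreAggregationSketch {
    static long totalRequests = 0;
    static long totalErrors = 0;
    static long totalLatencyMs = 0;
    static final List<String> sampledTraces = new ArrayList<>();
    static final Random rng = new Random(42);

    static void record(String traceId, long latencyMs, boolean error) {
        // 1) Aggregate first: averages and error rates stay exact
        //    regardless of the trace sampling rate.
        totalRequests++;
        totalLatencyMs += latencyMs;
        if (error) totalErrors++;
        // 2) Then sample: keep roughly 10% of traces.
        if (rng.nextDouble() < 0.10) sampledTraces.add(traceId);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            record("trace-" + i, 20 + (i % 5), i % 100 == 0);
        }
        System.out.println("requests=" + totalRequests
                + " errors=" + totalErrors
                + " avgLatencyMs=" + (totalLatencyMs / totalRequests)
                + " storedTraces=" + sampledTraces.size());
    }
}
```

Had the 10% filter been applied before the counters were updated, the error and latency figures would be estimates rather than exact values, which is the accuracy problem the article attributes to sampling in open-source tools.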
<strong>Real-world testing showed that during peak business hours the application performance overhead caused by ARMS application monitoring is more than 30% lower than that of open-source products, and the accuracy of the metrics data can be trusted:</strong> metrics such as interface-level average response time and error counts meet the needs of production-grade business.</p><p><img src="https://pic4.zhimg.com/80/v2-ba2517b5cb0ae39b775735ae29c347fb_720w.webp"></p><p>Figure: Interface-level metrics data</p><h3 id="Asynchronous-links-are-automatically-buried"><a href="#Asynchronous-links-are-automatically-buried" class="headerlink" title="Asynchronous links are automatically buried"></a>Asynchronous links are automatically instrumented</h3><p>The Java ecosystem has asynchronous thread-pool technology as well as numerous open-source asynchronous frameworks such as RxJava, Reactor Netty, and Vert.x. Compared with synchronous links, asynchronous links are technically harder to instrument automatically and to pass context through. Open-source products have incomplete coverage of the mainstream asynchronous frameworks, and instrumentation fails in certain scenarios. Once such a problem occurs, link analysis, the most important capability of an APM tool, can hardly play its role.</p><p>In that case, developers must instrument manually through the SDK to ensure context propagation across asynchronous links. This creates a huge workload and is hard to roll out quickly at scale within a team.</p><p>ARMS supports all major asynchronous frameworks, enabling asynchronous link context propagation without any intrusion into business code. 
Even if a particular version of some asynchronous framework is not supported in time, the ARMS team can make it up in a new probe version once users raise the requirement. <strong>After adopting ARMS application monitoring, the Cha Baidao technical team removed the previous manual instrumentation code for asynchronous frameworks outright, significantly reducing maintenance workload.</strong></p><p><img src="https://pic2.zhimg.com/80/v2-7a53aa7ffcc415d88ab3ffdbaf21a219_720w.webp"></p><p>Figure: Link context of an asynchronous call</p><h3 id="Utilization-of-higher-order-application-diagnostic-techniques"><a href="#Utilization-of-higher-order-application-diagnostic-techniques" class="headerlink" title="Utilization of higher-order application diagnostic techniques"></a>Utilization of higher-order application diagnostic techniques</h3><p>When instrumentation coverage is high enough, traditional APM and link tracing tools can help users quickly determine which link (Span) has a performance bottleneck, but they offer little further help when the root cause needs deeper investigation.</p><p>For example, when system CPU utilization rises sharply, is it because some business method is devouring CPU? This is a hard problem for most APM products, because the resource consumption of each link cannot be known from the link view alone. Cha Baidao engineers ran into similar problems many times when using open-source tools. 
At the time, they could only guess based on experience and then run repeated comparisons in the test environment to fully resolve the problem; they also tried some profiling tools, but the barrier to using them was relatively high and the results were mediocre.</p><p>ARMS application monitoring provides CPU &amp; Memory Diagnostics, which can effectively find bottlenecks in Java programs caused by CPU, memory, and I&#x2F;O, break the statistics down by method name, class name, and line number, and ultimately help developers optimize the program, reducing latency, increasing throughput, and saving costs. CPU &amp; Memory Diagnostics can be turned on temporarily when a specific problem needs troubleshooting, helping users find the root cause directly through a flame graph. In one scenario where an application’s CPU spiked in the production environment, Cha Baidao’s engineers used CPU &amp; Memory Diagnostics to pinpoint the problem to a specific business algorithm.</p><p><a href="https://pic3.zhimg.com/80/v2-04ca60e7bb473058c97e5d4e">https://pic3.zhimg.com/80/v2-04ca60e7bb473058c97e5d4e</a></p>]]></content>
    
    
    <summary type="html">From climbing to running - why do we need unit tests?</summary>
    
    
    
    
    <category term="IaaS" scheme="https://www.nablepart.com/tags/IaaS/"/>
    
    <category term="cloud" scheme="https://www.nablepart.com/tags/cloud/"/>
    
    <category term="cloud computing" scheme="https://www.nablepart.com/tags/cloud-computing/"/>
    
  </entry>
  
  <entry>
    <title>The first alpha version of Dubbo-js is here with direct browser access to Dubbo, gRPC backend microservices!</title>
    <link href="https://www.nablepart.com/99926bfef7ff/"/>
    <id>https://www.nablepart.com/99926bfef7ff/</id>
    <published>2023-10-29T11:50:26.000Z</published>
    <updated>2025-08-25T09:00:39.794Z</updated>
    
    <content type="html"><![CDATA[<p><em>Author: Bonnie Tsai</em></p><p>Based on the Triple protocol defined by Dubbo3, you can easily write browser- and gRPC-compatible RPC services that run over both HTTP&#x2F;1 and HTTP&#x2F;2. The Dubbo TypeScript SDK <strong>[1]</strong> supports defining services using IDL or programming-language-specific methods, and provides a lightweight set of APIs to publish or invoke these services.</p><p>Dubbo-js released its first alpha version supporting the Dubbo3 protocol in September. Its release has the potential to reshape the front-end and back-end architecture and communication model of microservices, letting you access back-end Dubbo and gRPC services directly from a browser page or web server. The project is under rapid development; developers interested in participating in the apache&#x2F;dubbo-js project are welcome to search for <strong>Pinned Group: 29775027779</strong> to join the developer group.</p><h2 id="Browser-Web-Application-Example"><a href="#Browser-Web-Application-Example" class="headerlink" title="Browser Web Application Example"></a>Browser Web Application Example</h2><p>This example demonstrates how to use dubbo-js to develop a web application that runs in a browser. The web page calls back-end services developed with dubbo node.js and generates the page content. This example demonstrates both IDL-based and non-IDL-based coding patterns.
</p><h3 id="IDL-mode"><a href="#IDL-mode" class="headerlink" title="IDL mode"></a>IDL mode</h3><h4 id="Pre-requisites"><a href="#Pre-requisites" class="headerlink" title="Pre-requisites"></a>Pre-requisites</h4><p>First, we’ll use Vite to generate our front-end project template, which has all the built-in feature support we’ll need later.</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">npm create vite@latest -- dubbo-web-example --template react-ts</span><br><span class="line">cd dubbo-web-example</span><br><span class="line">npm install</span><br></pre></td></tr></table></figure><p>Because we are using Protocol Buffers, we first need to install the relevant code generation tools: @bufbuild&#x2F;protoc-gen-es, @bufbuild&#x2F;protobuf, @apachedubbo&#x2F;protoc-gen-apache-dubbo-es, and @apachedubbo&#x2F;dubbo.</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">npm install @bufbuild/protoc-gen-es @bufbuild/protobuf @apachedubbo/protoc-gen-apache-dubbo-es @apachedubbo/dubbo</span><br></pre></td></tr></table></figure><h4 id="Defining-Services-with-Proto"><a href="#Defining-Services-with-Proto" class="headerlink" title="Defining Services with Proto"></a>Defining Services with Proto</h4><p>Now define a Dubbo service using Protocol Buffers (IDL).</p><p>Create the util&#x2F;proto directory under src and create the file.</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">mkdir -p src/util/proto &amp;&amp; touch src/util/proto/example.proto</span><br></pre></td></tr></table></figure><p>Write the 
contents:</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br></pre></td><td class="code"><pre><span class="line">syntax = &quot;proto3&quot;;</span><br><span class="line"></span><br><span class="line"></span><br><span class="line">package apache.dubbo.demo.example.v1;</span><br><span class="line"></span><br><span class="line"></span><br><span class="line">message SayRequest &#123;</span><br><span class="line">  string sentence = 1;</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line"></span><br><span class="line">message SayResponse &#123;</span><br><span class="line">  string sentence = 1;</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line"></span><br><span class="line">service ExampleService &#123;</span><br><span class="line">  rpc Say(SayRequest) returns (SayResponse) &#123;&#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>This file declares a service called ExampleService, and defines the Say method together with its request parameter SayRequest and return value SayResponse.</p><h4 id="Generating-Code"><a href="#Generating-Code" class="headerlink" title="Generating Code"></a>Generating Code</h4><p>Create the gen directory as a destination for the generated 
files.</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">mkdir -p src/util/gen</span><br></pre></td></tr></table></figure><p>Run the following command to generate code files in the gen directory using the protoc-gen-es and protoc-gen-apache-dubbo-es plugins:</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">PATH=$PATH:$(pwd)/node_modules/.bin \</span><br><span class="line">  protoc -I src/util/proto \</span><br><span class="line">  --es_out src/util/gen \</span><br><span class="line">  --es_opt target=ts \</span><br><span class="line">  --apache-dubbo-es_out src/util/gen \</span><br><span class="line">  --apache-dubbo-es_opt target=ts \</span><br><span class="line">  example.proto</span><br></pre></td></tr></table></figure><p>After running the command, you should see the following generated files in the target directory:</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">├── src</span><br><span class="line">│ ├── util</span><br><span class="line">│ │ ├── gen</span><br><span 
class="line">│ │ │ ├── example_dubbo.ts</span><br><span class="line">│ │ └── example_pb.ts</span><br><span class="line">│ └── proto</span><br><span class="line">│ │ └── example.proto</span><br></pre></td></tr></table></figure><h4 id="Creating-an-App"><a href="#Creating-an-App" class="headerlink" title="Creating an App"></a>Creating an App</h4><p>Need to download @apachedubbo&#x2F;dubbo-web first.</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">npm install @apachedubbo/dubbo-web</span><br></pre></td></tr></table></figure><p>Now we can import the service from the package and setup a client. Add the following to App.tsx:</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span 
class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br></pre></td><td class="code"><pre><span class="line">import &#123; useState &#125; from &quot;react&quot;;</span><br><span class="line">import &quot;./App.css&quot;;</span><br><span class="line"></span><br><span class="line"></span><br><span class="line">import &#123; createPromiseClient &#125; from &quot;@apachedubbo/dubbo&quot;;</span><br><span class="line">import &#123; createDubboTransport &#125; from &quot;@apachedubbo/dubbo-web&quot;;</span><br><span class="line"></span><br><span class="line"></span><br><span class="line">// Import service definition that you want to connect to.</span><br><span class="line">import &#123; ExampleService &#125; from &quot;./util/gen/example_dubbo&quot;;</span><br><span class="line"></span><br><span class="line"></span><br><span class="line">// The transport defines what type of endpoint we&#x27;re hitting.</span><br><span class="line">// In our example we&#x27;ll be communicating with a Dubbo endpoint.</span><br><span class="line">const transport = createDubboTransport(&#123;</span><br><span class="line">  baseUrl: &quot;http://localhost:8080&quot;,</span><br><span class="line">&#125;);</span><br><span class="line"></span><br><span class="line"></span><br><span class="line">// Here we make the client itself, combining the service</span><br><span class="line">// definition with the transport.</span><br><span class="line">const client = createPromiseClient(ExampleService, transport, &#123; serviceGroup: &#x27;dubbo&#x27;, serviceVersion: &#x27;1.0.0&#x27; &#125;);</span><br><span class="line"></span><br><span class="line"></span><br><span class="line">function App() &#123;</span><br><span class="line">  const [inputValue, setInputValue] = useState(&quot;&quot;);</span><br><span class="line">  const [messages, setMessages] = useState&lt;</span><br><span class="line">    &#123;</span><br><span class="line">      fromMe: boolean;</span><br><span class="line">      message: string;</span><br><span class="line">    &#125;[]</span><br><span class="line">  &gt;([]);</span><br><span class="line">  return (</span><br><span class="line">    &lt;&gt;</span><br><span class="line">      &lt;ol&gt;</span><br><span class="line">        &#123;messages.map((msg, index) =&gt; (</span><br><span class="line">          &lt;li key=&#123;index&#125;&gt;&#123;`$&#123;msg.fromMe ? 
&quot;ME:&quot; : &quot;Dubbo Server:&quot;&#125; $&#123;msg.message&#125;`&#125;&lt;/li&gt;</span><br><span class="line">        ))&#125;</span><br><span class="line">      &lt;/ol&gt;</span><br><span class="line">      &lt;form</span><br><span class="line">        onSubmit=&#123;async (e) =&gt; &#123;</span><br><span class="line">          e.preventDefault();</span><br><span class="line">          // Clear inputValue since the user has submitted.</span><br><span class="line">          setInputValue(&quot;&quot;);</span><br><span class="line">          // Store the inputValue in the chain of messages and</span><br><span class="line">          // mark this message as coming from &quot;me&quot;</span><br><span class="line">          setMessages((prev) =&gt; [</span><br><span class="line">            ...prev,</span><br><span class="line">            &#123;</span><br><span class="line">              fromMe: true,</span><br><span class="line">              message: inputValue,</span><br><span class="line">            &#125;,</span><br><span class="line">          ]);</span><br><span class="line">          const response = await client.say(&#123;</span><br><span class="line">            sentence: inputValue,</span><br><span class="line">          &#125;);</span><br><span class="line">          setMessages((prev) =&gt; [</span><br><span class="line">            ...prev,</span><br><span class="line">            &#123;</span><br><span class="line">              fromMe: false,</span><br><span class="line">              message: response.sentence,</span><br><span class="line">            &#125;,</span><br><span class="line">          ]);</span><br><span class="line">        &#125;&#125;</span><br><span class="line">      &gt;</span><br><span class="line">        &lt;input value=&#123;inputValue&#125; onChange=&#123;(e) =&gt; setInputValue(e.target.value)&#125; /&gt;</span><br><span class="line">        &lt;button type=&quot;submit&quot;&gt;Send&lt;/button&gt;</span><br><span class="line">      &lt;/form&gt;</span><br><span class="line">    &lt;/&gt;</span><br><span class="line">  );</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line"></span><br><span class="line">export default App;</span><br></pre></td></tr></table></figure><p>Run the following command to get the sample page.</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">npm run dev</span><br></pre></td></tr></table></figure><h4 id="启动-Server"><a href="#启动-Server" class="headerlink" title="Starting the Server"></a>Starting the Server</h4><p>Next, we need to start the Server. It can be developed in any language Dubbo supports, such as Java, Go, or Node.js. Here we use a Node.js server with an embedded Dubbo service; see the steps in Developing Dubbo Back-end Services with Node.js <strong>[2]</strong>.</p><p>Note, however, that we additionally need to modify the Node.js example: introduce @fastify&#x2F;cors to solve the cross-origin problem for front-end requests.</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">npm install @fastify/cors</span><br></pre></td></tr></table></figure><p>This needs to be changed in the server.ts file.</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span 
class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br></pre></td><td class="code"><pre><span class="line">...</span><br><span class="line">import cors from &quot;@fastify/cors&quot;;</span><br><span class="line"></span><br><span class="line"></span><br><span class="line">...</span><br><span class="line">async function main() &#123;</span><br><span class="line">  const server = fastify();</span><br><span class="line">  ...</span><br><span class="line">  await server.register(cors, &#123;</span><br><span class="line">    origin: true,</span><br><span class="line">  &#125;);</span><br><span class="line">  ...</span><br><span class="line">  await server.listen(&#123; host: &quot;localhost&quot;, port: 8080 &#125;);</span><br><span class="line">  ...</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line"></span><br><span class="line">void main();</span><br></pre></td></tr></table></figure><p>Finally, run the code to start the service.</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">npx tsx server.ts</span><br></pre></td></tr></table></figure><h3 id="IDL-less-mode"><a href="#IDL-less-mode" class="headerlink" title="IDL-less mode"></a>IDL-less mode</h3><p>In upcoming releases, we will continue to provide support for IDL-less mode communication, which will make it easier to access IDL-less back-end services.
Let’s take a quick look at how IDL-less mode works.</p><p>Again, you need to install @apachedubbo&#x2F;dubbo and @apachedubbo&#x2F;dubbo-web first.</p><pre><code class="auto">npm install @apachedubbo/dubbo @apa</code></pre>]]></content>
    
    
    <summary type="html">The first alpha version of Dubbo-js is here with direct browser access to Dubbo, gRPC backend microservices!</summary>
    
    
    
    
    <category term="IaaS" scheme="https://www.nablepart.com/tags/IaaS/"/>
    
    <category term="cloud" scheme="https://www.nablepart.com/tags/cloud/"/>
    
    <category term="cloud computing" scheme="https://www.nablepart.com/tags/cloud-computing/"/>
    
  </entry>
  
  <entry>
    <title>There&#39;s no myth about the big players when it comes to streaming computation.</title>
    <link href="https://www.nablepart.com/d032915b07e2/"/>
    <id>https://www.nablepart.com/d032915b07e2/</id>
    <published>2023-10-29T11:50:26.000Z</published>
    <updated>2025-08-25T09:00:39.794Z</updated>
    
<content type="html"><![CDATA[<p>Douyin and Toutiao (Today’s Headlines), the two most popular products under ByteDance, are also the face of the company. Behind them stand many supporting technical teams, and streaming computing is one of them.</p><p>Even at ByteDance, however, there is no myth in streaming computing. There was only a group of young people who, over six years and one step at a time, went from “knowing neither the technology nor the business” to carrying Byte’s internal streaming computing platform and its application scenarios, supporting machine learning, recommendation, data warehousing, search, advertising, streaming media, security and risk control, and other core businesses. In 2022, the team completed the cloud-native transformation of the Flink computing engine and formally offered its cloud capabilities to the public through the Volcano Engine.</p><p>This is not a heroic story that saves the day; there are no dramatic twists, no dazzling flowers and applause. It is, instead, the record of a small group among millions of ordinary developers, passively accepting the growth forced on them by the business while actively seeking breakthroughs in open source.</p><h3 id="01-Code-to-be-written-and-business-to-be-pulled"><a href="#01-Code-to-be-written-and-business-to-be-pulled" class="headerlink" title="01 Code to be written and business to be pulled"></a>01 <strong>Code to be written and business to be pulled</strong></h3><p>In 2019, with the explosive growth of Douyin, ByteDance stood at the start of a period of rapid growth, and live streaming, short video, advertising, and other businesses all rode the wave. All of these businesses needed streaming computing to support them.</p><p><strong>Zhang Guanghui, the head of Byte’s streaming computing team, faced many thorny problems.</strong></p><p>First, rewind the timeline two years. When Zhang Guanghui had just joined ByteDance, the computing engine was Apache Storm: born in 2011, Twitter’s first-generation stream processing system, which supported only low-level APIs.</p><p>“All Storm tasks were submitted by scripts on development machines, and the operations platform was in a very primitive state. If the Storm cluster failed, jobs couldn’t be recovered automatically, and you couldn’t even find all the existing jobs.” Zhang Guanghui remembers this vividly.</p><p>That said, neither side could look down on the other. At that time, Zhang Guanghui’s resume showed no streaming computing product experience, only some “close relatives”: upstream and downstream products of streaming computing, such as data collection and message queues.</p><p>The good news was that Byte’s business scenarios were simple then, focusing mainly on machine learning, and Zhang Guanghui and his team switched the streaming computing engine from Apache Storm to Apache Flink. The so-called team, in fact, had only two people, himself included. Then, in 2018, he worked with the data flow team to complete the construction of the streaming computing platform, including task monitoring and alerting, log collection, anomaly diagnosis, and other tooling.</p><p>By 2019, the business scenarios that streaming computing had to support had become quite rich, expanding to real-time data warehousing, security and risk control, and more, and were still increasing. The demands of individual scenarios had also become more complex: the recommendation business kept getting bigger, with single jobs exceeding 50,000 cores, while the real-time data warehouse scenario required SQL for development and had higher requirements for data accuracy.</p><p>However, due to a serious shortage of manpower on the team, progress was very slow.
“There are only two people on the team, and we take turns being on call. When we weren’t on duty, we were often solving problems left over from the previous week’s on-call shift.” Zhang Guanghui described it this way.</p><p>Zhang had to expand his staff while working with the data integration team to build the SQL platform. It was at this time that Li Benchao joined the streaming computing team and, shortly after, became the technical lead for Flink’s SQL direction.</p><p><strong>However, Li Benchao didn’t have much experience using SQL to develop streaming computing tasks: “At the beginning, I understood neither the technology nor the business.”</strong></p><p>Before this, he had worked at a small-to-medium-sized enterprise with a broad scope of work, in which streaming computing counted as only one direction. After joining Byte, Li Benchao realized that the scale of Byte’s streaming computing far exceeded his imagination. Previously he had seen only modest concurrency; at Byte, the concurrency of a single task could reach tens of thousands, and one task alone used more computing resources than all the tasks at his previous company combined.</p><p>But Li Benchao had no choice but to catch up. Three mornings out of every five-day work week, the first thing Zhang Guanghui did was grab him and ask which business teams he had talked to and how many new SQL tasks could be created.</p><p><strong>With the metrics spinning in his head every day, Li Benchao had to go out and “pull business” for the team.</strong> The phrase is the same one used for stopping passers-by on the street to sell products, except the location changed to ByteDance’s various office zones in Beijing.</p><p>“Hey, we can develop this streaming computation via SQL. Are you interested? Do you want to know more about it?” Li Benchao contacted the heads of e-commerce, live streaming, advertising, games, education, and other business departments.
As long as someone nodded, Li Benchao would immediately take the shuttle bus to their work area for an on-site discussion.</p><p>Zhang Guanghui commented, “At that time, it was really ‘doing everything’.”</p><p>With the SQL platform, development and maintenance efficiency soared. “Originally, it took one person one or two days to develop a task. Now, one person can directly handle ten tasks in a day. In addition, the way the business side communicates with us is much simpler, and we can understand the code the other side writes, which makes optimization easy.”</p><p>In addition, Byte has done a lot of work on Flink’s stability, supporting features such as a blacklist mechanism, single-point failure recovery, gang scheduling, and speculative execution. Since the business has higher requirements for data accuracy, the team supported the Checkpoint mechanism to ensure that data is not lost, and it has been widely promoted and implemented at Byte.</p><p>In this process, Li Benchao also found that Flink might not be as powerful and easy to use as he had thought; for example, job state is not compatible after arbitrary changes to the SQL. To address such problems, which the community had not yet solved, Byte explored many internal optimization solutions.</p><p><img src="https://oscimg.oschina.net/oscnet/up-e8d825d23950c5e2d58dc9a101db0d82ab1.png"></p><p><em>ByteDance Flink SQL task volume</em></p><h3 id="02-Flink-turns-out-to-be-more-than-streaming-computing"><a href="#02-Flink-turns-out-to-be-more-than-streaming-computing" class="headerlink" title="02 Flink turns out to be more than streaming computing"></a>02 <strong>Flink turns out to be more than streaming computing</strong></h3><p>After ByteDance chose Flink as its stream processing engine, tens of thousands of Flink jobs ran on its internal clusters every day, with peak traffic as high as 10 billion records per second. Individual jobs were also very large: a single compute node could use about 30,000 concurrency, and a whole job could span more than 300 physical machines. The stability and performance optimization of such Flink clusters, and the deployment, execution, and failure recovery of a single very large job, pose problems with hardly a parallel anywhere in the industry.</p><p>Since Flink is a unified stream and batch computing engine, ByteDance actively promoted stream-batch unification on Flink and launched more than 20,000 Flink batch jobs, solving many stability and performance problems along the way, such as Hive syntax compatibility, slow nodes, and speculative execution.</p><p>At the same time, ByteDance launched the ByteHTAP project internally. Combined with Byte’s internal OLTP systems, Byte was able to support analytical computations with low data latency (sub-second) and high data-consistency requirements, but it still lacked a compute engine to support OLAP calculations. Because Byte had already done a lot of deep optimization in Flink, Flink was finally chosen as the OLAP engine of ByteHTAP.</p><p><img src="https://oscimg.oschina.net/oscnet/up-0bdaccfa52987511e6780144f12d4450ee4.png"></p><p><strong>However, as ByteHTAP began to provide online OLAP services to the business side, a new problem arose.</strong> Not only did the business require low latency for a single concurrent query, it also wanted the team to provide an OLAP service that could support high concurrency.</p><p>At the beginning of 2021, Fang Yong joined ByteDance as a streaming computing architect. To support the online business, Fang Yong and his team had to build up this capability as soon as possible.</p><p>“The whole development process was tortuous and stressful,” Fang Yong said. “ByteHTAP was already providing online services, so we needed to iterate quickly to make Flink support higher concurrent queries.”</p><p>At every weekly team meeting, Fang Yong would keep an eye on the QPS metrics. It took nearly half a year to “finally optimize QPS from single digits to dozens, until a single online cluster supported hundreds of QPS.”</p><p>Over the last two years, Byte has been contributing many of its Flink OLAP optimizations back to the community, and Flink OLAP has been added to the Apache Flink 2.0 Roadmap.</p><p>A complete data production chain is divided into three computing scenarios: streaming, batch, and OLAP computing. The real-time warehouse scenario needs Storm or Flink to support streaming computation, while the batch scenario relies on Hive or Spark; when the computation semantics differ, the two engines lead to inconsistencies between streaming and batch results.
Moreover, after stream and batch processing, the data has to be imported into the warehouse or offline storage, and then yet another OLAP engine has to be introduced to explore and analyze it, which makes correctness and consistency even harder to guarantee.</p><p>Optimization and maintenance are also quite troublesome. Three systems mean three teams have to be built to maintain them separately, and whenever something needs optimizing or a bug needs fixing, issues have to be raised with three different communities for discussion.</p><p>The Flink community proposed Streaming Warehouse to solve this problem. Byte investigated the current direction of streaming computing and the Streaming Warehouse architecture, and built a Streaming Warehouse system based on Flink and Paimon that unifies stream and batch computation and storage, adding core functions such as job and data lineage management, data consistency management, and streaming data revision and backtracking, to solve the problems of streaming computing accuracy and data operation and maintenance.</p><p><img src="https://oscimg.oschina.net/oscnet/up-fca6618852b7122974aca81d2377233a202.png"></p><p><strong>In the end, “three engines, three teams” became “one engine, one team.”</strong> In Fang Yong’s words, with Flink as a unified streaming, batch, and OLAP computing engine for the entire data production chain, there is no longer any need to worry about the complexity of real-time data and business analysis.</p><p>As for the future of Flink, Fang Yong already has a vision.
He hopes to gather the R&amp;D capabilities of the community to improve the whole Flink computing ecosystem and turn Flink into a Streaming Warehouse system that unifies streaming, batch, and OLAP.</p><h3 id="03-New-Business-New-Scenarios-New-Challenges"><a href="#03-New-Business-New-Scenarios-New-Challenges" class="headerlink" title="03 New Business, New Scenarios, New Challenges"></a>03 New Business, New Scenarios, New Challenges</h3><p>In 2022, “Streaming Computing Flink Edition,” a commercialized computing engine developed by Byte’s streaming computing team, was launched on the Volcano Engine, officially providing computing power on the cloud to the outside world instead of serving only Byte’s internal business.</p><p>Inside Byte, this product is called “Serverless Flink.” It draws on ByteDance’s real-time computing cluster practice, the largest in the industry, and builds on the Volcano Engine Container Service (VKE&#x2F;VCI) to provide extreme Serverless elasticity: an out-of-the-box, next-generation cloud-native, fully managed real-time computing platform.</p><p><strong>In fact, it may not be appropriate to call Serverless Flink a newly launched product.</strong> Li Benchao explained that the so-called “Streaming Computing Flink Edition” is really a “packaging up” of the product experience and technical capabilities the team accumulated over six years of running Apache Flink at scale inside Byte. Rather than a brand-new product, it is a derivative of Apache Flink.</p><p>Being derived from Apache Flink, it can be understood as an enhanced version of Apache Flink, 100% compatible with it, with many additional features:</p><ul><li><p>Development efficiency. The Streaming Computing Flink Edition supports operator-level debug output, Queryable State, and Temporal Table Function DDL, significantly improving development efficiency over the open-source version of Flink.</p></li><li><p>Reliability improvements. The Streaming Computing Flink Edition supports checkpointing for a single task, improving the checkpoint success rate under high concurrency. Single-point task recovery and a node blacklisting mechanism ensure fast response to faulty nodes and avoid restarting the whole business.</p></li><li><p>Serverless cloud-native architecture. Extreme elasticity, with fine-grained scheduling down to 1‰ of a core.</p></li><li><p>Ease of use. Minimal SQL development, out-of-the-box, O&amp;M-free, supporting full lifecycle management of streaming data.</p></li><li><p>High performance at a low price. Cost-effective, SLA-guaranteed, with ultra-low TCO.</p></li></ul><p><img src="https://oscimg.oschina.net/oscnet/up-94e4c01fd23fdb7e7327a975d5f2575b6dd.png"></p><p><strong>Streaming Computing Flink Edition architecture diagram</strong></p><p><strong>After Serverless Flink went live on the Volcano Engine, Fang Yong realized that external customer needs were very different from internal business needs.</strong> For example, some customers were still using relatively early streaming technology stacks such as Storm and Samza. Therefore, the team not only needs to provide technical training and support to customers, but also to help the</p>]]></content>
    
    
    <summary type="html">There&#39;s no myth about the big players when it comes to streaming computation.</summary>
    
    
    
    
    <category term="IaaS" scheme="https://www.nablepart.com/tags/IaaS/"/>
    
    <category term="cloud" scheme="https://www.nablepart.com/tags/cloud/"/>
    
    <category term="cloud computing" scheme="https://www.nablepart.com/tags/cloud-computing/"/>
    
  </entry>
  
  <entry>
    <title>When BACnet meets IoT, you&#39;ll experience a different kind of building</title>
    <link href="https://www.nablepart.com/1ddaeb980476/"/>
    <id>https://www.nablepart.com/1ddaeb980476/</id>
    <published>2023-10-29T11:50:26.000Z</published>
    <updated>2025-08-25T09:00:39.798Z</updated>
    
<content type="html"><![CDATA[<h2 id="Introduction"><a href="#Introduction" class="headerlink" title="Introduction"></a>Introduction</h2><p>In the 14th Five-Year Plan, “new infrastructure” is undoubtedly a key area of concern, and a term we are all familiar with by now. Compared with traditional infrastructure, its biggest difference is that it embodies the characteristics of the digital economy: digitalization, networking, and intelligence. There is no doubt that the new infrastructure will become a powerful support for the construction of new smart-city facilities. Smart venues and smart buildings, the most important parts of a smart city, are no longer unfamiliar terms today. In fact, almost all large commercial complexes and office parks are equipped with integrated building automation, security, office, and other systems, among which the building automation system (BAS) is the most important and indispensable part of a smart building.</p><p><img src="https://static001.geekbang.org/infoq/47/47ed7a921947030258393aed656bcf77.jpeg"></p><p>A building automation system enables unified monitoring and management of all the utility mechanical and electrical equipment of a whole building. The system can seamlessly interface with the various building facility subsystems, including the central air-conditioning system, water supply and drainage system, power supply and distribution system, lighting system, and so on, to further implement automated control, management, and optimization of the equipment.
For example, the system can detect hidden dangers that humans cannot spot in time, avoiding losses from major accidents; it can automate indoor temperature&#x2F;humidity control and humanized lighting control to enhance the office experience; and it can optimize the automated control and management of equipment to reduce failure rates and operation and maintenance costs. Through comprehensive optimization across multiple usage scenarios, it aims to create a safe, efficient, comfortable, and convenient space environment.</p><h2 id="BACnet-Introduction"><a href="#BACnet-Introduction" class="headerlink" title="BACnet Introduction"></a>BACnet Introduction</h2><p>Next, let’s talk about some of the technology behind building automation systems. As the “control core” of an intelligent building, a BAS faces a variety of subsystems and devices such as lighting, cooling, and heating; even within the same category of device there are differences across manufacturers, models, and interfaces, which makes BAS systems very complex and costly to implement. To reduce this complexity, the industry has introduced a number of building automation protocol standards, of which the BACnet protocol undoubtedly enjoys the highest degree of attention and acceptance. Below, we introduce the basics of the protocol.</p><p>BACnet, in full A Data Communication Protocol for Building Automation and Control Networks, is a building automation network communication protocol developed in June 1995 under ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers). The standard brings devices from different manufacturers together into a consistent automation system, and is designed to meet the need for interoperability between devices from different manufacturers. The BACnet protocol covers two parts, device data communication and command and control, and the relevant communication standards are designed around these two parts.</p><p><img src="https://static001.geekbang.org/infoq/8d/8daa07c59c8369963eb5275b60fbdc12.png"></p><h4 id="BACnet-Protocol-Layers"><a href="#BACnet-Protocol-Layers" class="headerlink" title="BACnet Protocol Layers"></a>BACnet Protocol Layers</h4><p>BACnet uses a simplified 4-layer network protocol structure, comprising the physical layer, data link layer, network layer, and application layer, as follows:</p><p><img src="https://static001.geekbang.org/infoq/29/293da6b71d6d765fb47ed52cd4b92b43.png"></p><p>Figure - BACnet 4-layer protocol structure</p><p>Explanation:</p><ul><li><p>Physical Layer: provides the physical connection between devices and the means of transmitting carrier signals.</p></li><li><p>Data Link Layer: abstracts and converts the physical signals into data frames, propagated as frames (Frame) or packets (Packet). This layer is responsible for access to and addressing of the communication medium, error correction, and flow control.</p></li><li><p>Network Layer: routes and transmits messages within the local network or across networks, and is responsible for packet sequencing&#x2F;traffic&#x2F;error-checking capabilities.</p></li><li><p>Application Layer: defines the communication semantics of the BACnet protocol, including acknowledged&#x2F;unacknowledged packets, and the communication of BACnet standard objects&#x2F;services.
The application layer is the most important part of the protocol standard design, and is also the most concerned part of BACnet application development.</p></li></ul><p>The BACnet protocol unifies the application layer and network layer parts, and provides seven combination schemes in the physical and data link layer parts. Among them, two LAN networking based on BACnet IP&#x2F;Ethernet and BACnet MSTP&#x2F;RS485 are the most widely used in building automation scenarios. BACnet IP allows communication across subnets&#x2F;area control systems and takes advantage of fiber optics and Gigabit Ethernet to achieve IP addressing of devices.</p><h4 id="BACnet-Network-Topology"><a href="#BACnet-Network-Topology" class="headerlink" title="BACnet Network Topology"></a>BACnet Network Topology</h4><p>In the BACnet network layer definition, a network is a localized network consisting of one or more network segments interconnected by repeaters or bridges with a single local address space; in a BACnet network, the network layer implements global to local address translation and addressing.</p><p>The following is a typical BACnet network topology:</p><p>! <a href="https://static001.geekbang.org/infoq/ba/ba4e820c088ad1a1dc3f012407637682.png"></a></p><p>Related concepts</p><ul><li><p>Physical Segment: A section of physical media that directly connects some BACnet devices.</p></li><li><p>Segment: A network segment formed by connecting multiple physical segments through repeaters at the physical layer.</p></li><li><p>Network: Multiple BACnet segments interconnected by bridges, each BACnet network forming a single MAC address domain.</p></li><li><p>Network: Multiple networks using different LAN technologies are interconnected using BACnet routers to form a BACnet network. 
In a BACnet network, there is exactly one message path between any two nodes.</p></li></ul><p>BACnet networks have obvious LAN characteristics, and BACnet router nodes can connect BACnet local networks to external networks (e.g., Ethernet, ARCNET). In addition, the BACnet network layer defines a clear packet protocol (NPDU) specification and supports unicast, multicast, and broadcast functions for packets.</p><h4 id="BACnet-Application-Interaction"><a href="#BACnet-Application-Interaction" class="headerlink" title="BACnet Application Interaction"></a>BACnet Application Interaction</h4><p>Since the BACnet protocol utilizes only a simplified four-layer structure, the BACnet Application Layer Protocol needs to consider end-to-end reliable transport in addition to application layer services. An APDU (application layer packet) contains two parts:</p><ul><li><p>Protocol Control Information (PCI), which is a fixed header and contains the APDU type (service request&#x2F;response), message segmentation reorganization information.</p></li><li><p>User data, which is variable and contains information specific to each service request and service response.</p></li></ul><p>The BACnet application layer supports two modes of interaction: “request-response” and “request-no-response”.</p><p>The BACnet application layer supports both “request-response” and “request-no-response” interaction modes. 
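To make the APDU structure concrete, here is a toy Python model of the fixed PCI header plus variable user data, and of the two interaction modes; the field names and type tags are illustrative only, not the real BACnet wire encoding.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative APDU type tags (the real protocol defines more).
CONFIRMED_REQUEST = 0
UNCONFIRMED_REQUEST = 1
SIMPLE_ACK = 2

@dataclass
class APDU:
    apdu_type: int    # PCI: service request or response
    invoke_id: int    # PCI: pairs a response with its request
    segmented: bool   # PCI: segmentation/reassembly flag
    user_data: bytes  # variable part: service-specific payload

def handle(request: APDU) -> Optional[APDU]:
    """Confirmed ("request-response") services get an answer; unconfirmed ones do not."""
    if request.apdu_type == CONFIRMED_REQUEST:
        return APDU(SIMPLE_ACK, request.invoke_id, False, b"")
    return None  # "request-no-response" mode

req = APDU(CONFIRMED_REQUEST, invoke_id=7, segmented=False, user_data=b"ReadProperty")
ack = handle(req)
```

A confirmed request yields an acknowledgment carrying the same invoke id, which is how the application layer pairs responses with requests; an unconfirmed request yields nothing.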
<img src="https://static001.geekbang.org/infoq/62/62ad05c67c6df269d0d04dd42fe664c7.png"></p><p>Figure - Request-Response Mode</p><p><img src="https://static001.geekbang.org/infoq/a5/a5818f9161b0eb1514e67a928c49c0e8.png"></p><p>Figure - Request-No-Response Mode</p><h4 id="Objects-and-Services"><a href="#Objects-and-Services" class="headerlink" title="Objects and Services"></a>Objects and Services</h4><p>BACnet borrows object-oriented thinking to build a shared language abstraction for communication between network devices.</p><p>To appreciate this further, we need the following concepts:</p><ul><li><p>Objects describe analog inputs, outputs, program modules, and so on. A BACnet device contains one or more objects, and devices interoperate by reading and modifying each other’s object properties.</p></li><li><p>Properties describe the underlying fields of an object; for a sensor input object, for example, Present_Value is one of its properties.</p></li><li><p>Services describe the operations on an object, such as reading one of its properties, or implementing alarms and notifications based on it.</p></li></ul><p>In short, an object is an abstract description of the “network-visible” part of a building automation device, and a service provides the commands to access and manipulate that information.</p><p>All BACnet objects must contain the following common properties:</p><ol><li><p>ObjectIdentifier, which uniquely identifies the object within its device. The ObjectIdentifier is a 32-bit value composed of a 10-bit ObjectType and a 22-bit InstanceNumber.</p></li><li><p>ObjectName. A BACnet device can broadcast the name of an object it contains in order to establish connections with other devices that hold the object in question.</p></li><li><p>ObjectType. Objects of different types each have their own set of properties.</p></li></ol><p>The relationship between objects and services can be pictured as follows:</p><p><img src="https://static001.geekbang.org/infoq/a3/a33c331e6a1ea44d1c581bdf88cfe77a.png"></p><p>Figure - BACnet objects and services</p><p>Explanation:</p><ul><li><p>The BACnet protocol requires each device to contain a unique “device object”; reading its properties yields full information about the device.</p></li><li><p>BACnet devices contain multiple analog input/output objects whose properties (present values) represent sensor and controller points.</p></li><li><p>A BACnet program communicates with a device through BACnet services; e.g., the ReadProperty service reads point data.</p></li></ul><h4 id="Built-in-Definitions"><a href="#Built-in-Definitions" class="headerlink" title="Built-in Definitions"></a>Built-in Definitions</h4><p>BACnet ships with a set of standard objects and services, which continue to be extended as the protocol evolves. The protocol currently defines more than 49 built-in objects; some common ones are listed below:</p><p><img src="https://static001.geekbang.org/infoq/70/7004c6b8b57f3136cc8b1f90f3ea940e.png"></p><p>BACnet built-in services fall into six main categories:</p><ol><li><p>Alarm and Event Services provide notification of internal property or state changes.</p></li><li><p>File Access Services provide methods for reading and writing files.</p></li><li><p>Object Access Services provide methods to read, modify, and write property values, and to add and delete objects.</p></li><li><p>Remote Device Management Services provide maintenance and fault-detection tools for BACnet devices.</p></li><li><p>Virtual Terminal Services provide a character-oriented, bidirectional data interaction mechanism.</p></li><li><p>Network Security Services provide peer entity authentication, data origin authentication, operator authentication, and data encryption.</p></li></ol><h2 id="Problems-with-Traditional-BA-Systems"><a href="#Problems-with-Traditional-BA-Systems" class="headerlink" title="Problems with Traditional BA Systems"></a>Problems with Traditional BA Systems</h2><p>With the background above, we now have a reasonable picture of BACnet and BA systems. It is not hard to see that BACnet remains primarily a protocol for local/LAN networking. In the BA field most devices are static — they rarely move in space — which essentially matches the nature of buildings. Today BACnet is deployed throughout all kinds of large building systems, yet most BA systems still suffer from serious problems, mainly in the following respects:</p><p><strong>Complex systems that are hard to deploy</strong></p><p>First, there are many kinds of BA systems: a single BA system can contain many subdivided subsystems and devices, making subsystem integration and deployment difficult; large buildings also have highly complex spatial designs, so the overall hardware and software wiring design and implementation are very complex. Wireless deployment is hard to achieve with traditional solutions, and rapid rollout is impossible.</p><p><strong>Backward operation and maintenance model</strong></p><p>The traditional O&amp;M model is relatively backward, still relying mostly on manual inspection, so overall efficiency depends heavily on human effort and professional skill.</p><p><strong>Operational inefficiency and wasted energy</strong></p><p>Most traditional BA systems only achieve “remote control” of equipment; they lack data integration and analysis capabilities, cannot fully exploit the value of their data, and struggle to analyze and optimize the energy consumption of equipment and spaces.</p><p><strong>Closed systems and data silos</strong></p><p>Through years of market competition, traditional BA vendors have built technical barriers around self-contained systems. Subsystem application protocols are diverse and proprietary, and system data is severely siloed, further increasing system complexity and difficulty of use.</p><h2 id="Huawei-Cloud-Facility-aPaas"><a href="#Huawei-Cloud-Facility-aPaas" class="headerlink" title="Huawei Cloud Facility aPaas"></a>Huawei Cloud Facility aPaas</h2><h4 id="Facility-aPaas-Architecture"><a href="#Facility-aPaas-Architecture" class="headerlink" title="Facility aPaas Architecture"></a>Facility aPaas Architecture</h4><p>Huawei Cloud has launched the Facility aPaas service, which uses Huawei Cloud IoT device access and the IoT edge cloud engine as a base to construct building</p>]]></content>
    
    
    <summary type="html">When BACnet meets IoT, you&#39;ll experience a different kind of building</summary>
    
    
    
    
    <category term="IaaS" scheme="https://www.nablepart.com/tags/IaaS/"/>
    
    <category term="cloud" scheme="https://www.nablepart.com/tags/cloud/"/>
    
    <category term="cloud computing" scheme="https://www.nablepart.com/tags/cloud-computing/"/>
    
  </entry>
  
  <entry>
    <title>the designs in trading systems</title>
    <link href="https://www.nablepart.com/09bfbcf84161/"/>
    <id>https://www.nablepart.com/09bfbcf84161/</id>
    <published>2023-10-29T11:50:26.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
    <content type="html"><![CDATA[<h3 id="Preface"><a href="#Preface" class="headerlink" title="Preface"></a>Preface</h3><p>Recently I have been reading a few books, one of which is “Patterns of Enterprise Application Architecture”. I wanted to write some notes, but the book dates from 2003, a long time ago; system architecture has changed drastically since then, and the excerpts feel dated and no longer resonate strongly. Still, I remain fond of the inner calm of reading it back then. I have also studied design principles and patterns before, but mostly from intuition, talking about feelings. To do the subject justice, it seems worth going back to the logic in the book and connecting it with my understanding from daily work. So, below, I briefly discuss these principles and a selection of the patterns.</p><h4 id="Design-Principles"><a href="#Design-Principles" class="headerlink" title="Design Principles"></a>Design Principles</h4><p><strong>Single Responsibility Principle</strong></p><p><strong>Definition:</strong></p><blockquote><p>The Single Responsibility Principle (SRP), also known as the single function principle, is one of the five fundamental principles of object-oriented design (SOLID). It states that a class should have only one reason to change.</p></blockquote><p><strong>Case:</strong> Different business activities get different service entrances. Whether in the fulfillment system or the reverse-refund system, there are many business processes (BPM). The advantage is that scenarios are cleanly separated: per-scenario rate limiting, error definitions, process design, and regression testing can all evolve independently, and the impact surface of a change is far more certain. 
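As a minimal sketch (class and method names hypothetical), one entrance per business activity keeps each service with a single reason to change:

```python
# Hypothetical sketch: one entrance per business activity, so each can
# evolve its own rate limits, error codes, and regression tests independently.
class FulfillmentService:
    """Handles only fulfillment; changes for one reason."""
    def confirm_receipt(self, order_id: str) -> str:
        return f"fulfilled:{order_id}"

class RefundService:
    """Handles only reverse refunds; changes for a different reason."""
    def apply_refund(self, order_id: str) -> str:
        return f"refunding:{order_id}"
```

Compare this with a single shared `doEverything(op_code)` entrance: every scenario would then share one impact surface.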
If instead you use one common service that routes to many sub-services, it may seem to enable some shared operations, but the mutual constraints multiply, and aspects can already handle most cross-cutting concerns, so there is no particular need for it. <strong>Extension:</strong> It is worth digging further: although consensus on isolating the entrance layer is easy to reach, further down the stack, should processes, process nodes, capabilities, and extensions still be shared across scenarios? In practice it depends on how much the capabilities differ, how complex the scenarios are, and on overall development and maintenance costs, so different systems settle on rather different styles:</p><ul><li>The fulfillment system organizes its lower layers by capability; within one capability, many scenarios must be considered. At the forfeiture extension point, for example, a call may come from confirming receipt, from a return with forfeiture, or from deposit forfeiture;</li><li>The reverse-refund system keeps extensions independent per business activity wherever possible — refund-timeout customization, buyer requests a refund, seller agrees to refund, buyer returns the goods, and so on — each business activity has its own extension points.</li></ul><p>Independence shrinks the impact surface, but when making a change, too much independence means some scenarios can be missed; everything has two sides. The good news is that when we make these decisions, the scenarios we face are usually concrete and countable.</p><p><img src="https://pic3.zhimg.com/80/v2-5a43d6249d6cd7a920e9f2de642986ee_720w.webp"></p><p>Understanding the Single Responsibility Principle</p><p><strong>Open-Closed Principle</strong></p><p><strong>Definition:</strong></p><blockquote><p>The Open-Closed Principle, in object-oriented programming, states that “software entities (classes, modules, functions, etc.) should be open for extension but closed for modification”; that is, an entity should be able to change its behavior without changing its source code.</p></blockquote><p><strong>Case:</strong> Whether in the iterations of the TMF extension framework or in the later Star Ring system, an important goal is “isolating business from platform”, which is itself an important embodiment of the open-closed principle. Core logic should be controlled by the platform staff who know it best, kept as general as possible, and modified rarely; extension logic should be understandable to business developers, as flexible as possible, and easy to adjust. Looking inside the system, many domain capabilities have extensions as well: payment, for example, can go directly to Alipay, or go through the payment system to WeChat and other non-Alipay channels. Such an extension also embodies the open-closed principle — it just sits closer to the core process, so its impact is larger. Looking outward, even inside business APP packages, product packages, and other plugins, which may serve multiple industries and scenarios, there can be plenty of rerouting and extension. The Taobao system, for example, serves many industries — clothing, home appliances, beauty, and so on — whose customizations differ, often implemented with strategy or chain-of-responsibility extension modes. As you can see, every level can design its own extension mechanism. 
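The payment-channel example can be sketched as follows (channel names from the text, code hypothetical): the core checkout flow stays closed to modification, while new channels are added purely by extension.

```python
from abc import ABC, abstractmethod

class PaymentChannel(ABC):
    """Open for extension: add a channel by subclassing."""
    @abstractmethod
    def pay(self, amount: int) -> str: ...

class Alipay(PaymentChannel):
    def pay(self, amount: int) -> str:
        return f"alipay:{amount}"

class WeChatPay(PaymentChannel):
    def pay(self, amount: int) -> str:
        return f"wechat:{amount}"

def checkout(channel: PaymentChannel, amount: int) -> str:
    # Closed for modification: the core flow never changes when a channel is added.
    return channel.pay(amount)
```

Adding a new non-Alipay channel means writing one more subclass; `checkout` itself is untouched, which is exactly the "closed to modification" half of the principle.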
<strong>Extension:</strong> At the extension level of Star Ring, a few issues deserve further thought. <strong>1. Business isolation mechanism.</strong> For the isolated, changeable part we actually expect more: different businesses and their maintainers should still be isolated from one another. Star Ring uses the concept of business identity for isolation, but that is only a technical device — it pushes scenario conflicts forward to be resolved as early as possible, reducing later implementation pressure, and it is not very flexible to use. For example, before product circling had specified a “business identity”, circling products by labels and similar markers would cross multiple “businesses”. Fortunately there is the “product package” mechanism, which can be stacked on top of a business to reuse logic, although “missed stacking” scenarios still crop up. If, however, there were no business identity at all and decisions were based on the request scene, the impact surface and its expression would be full of uncertainty, and “who beats whom” conflicts would be hard to adjudicate. <strong>2. The boundary between business and platform.</strong> We often speak of the base domain, which can be read as base + domain: beyond domain capabilities and extensions, things like business processes, commercial capabilities, base implementations, platform shared code, common packages, and development SDKs are all considered base and require platform involvement. Yet there are frequent exceptions:</p><ul><li><p>Inside a domain extension, a capability provided for one particular business (with complete, independent logic) may bypass the extension point: the jar package has long lived in the platform, yet its evolution rules are essentially set by the business, with platform developers intervening only in special cases of cooperation;</p></li><li><p>In an independently deployed system, the code base is forked for independent evolution, and at that point the entire hierarchy is defined as “business”;</p></li><li><p>Some business capabilities are treated as platform capabilities and integrated into the platform sar package, yet capabilities such as tax and import are essentially maintained by the international business and are hardly platform logic.</p></li></ul><p>From these examples, the boundary between business and platform transcends any fixed layer definition and is no longer absolute. At its core it still follows the direction of “whoever has the authority bears the responsibility”.</p><p><img src="https://pic3.zhimg.com/80/v2-3f667a3b4bcc0b51331c1447506c4a0a_720w.webp"></p><p>Understanding the Open-Closed Principle</p><p><strong>Liskov Substitution Principle</strong></p><p><strong>Definition:</strong></p><blockquote><p>The Liskov Substitution Principle (LSP) is one of the fundamental principles of object-oriented design. LSP is the cornerstone of reuse through inheritance: a base class can truly be reused only if a derived class can replace it without affecting the functionality of the software unit, while the derived class may add new behaviors on top of the base class.</p></blockquote><p><strong>Case:</strong> Substitutability shows up everywhere in daily work:</p><ul><li>When customizing at an extension point, we don’t care whether the result comes from a business package or a scenario product package, only what the result is;</li><li>When switching databases, we don’t care which data service handles the call, only the returned result;</li><li>When calling an external payment system, we don’t care whether it goes to Alipay or WeChat, only the payment result;</li><li>When querying an order, we don’t care whether it hits the order repository or an external service (such as evaluation), only the query result;</li><li>……</li></ul><p>Substitutability lets us program against abstractions; how smooth the substitution is depends on whether our abstractions make sense. <strong>Extension:</strong> Even when we abstract and allow customizable replacement, it is often hard to make the replacement truly seamless:</p><ul><li>Service guarantees may differ: queries for orders pending payment or shipment go to the order library, while pending-evaluation queries go to the evaluation interface, and the two may not offer the same level of protection and capability, so extra stability guarantees are needed;</li><li>Capabilities may differ: after three months an order moves into the history library; the query layer can be adapted so the consumer notices nothing, but subsequent operations become constrained because the two sides’ capabilities are not the same — some buttons are downgraded once an order enters the history library;</li><li>Contracts may differ: for example, when swapping payment systems, Alipay’s secured transactions make refunds quick and convenient, while WeChat channels, constrained by fund custody and policy, may not refund as promptly; the two sides’ error codes and the like differ as well;</li><li>……</li></ul><p><img src="https://pic1.zhimg.com/80/v2-68877e2e80e707a022f13e90a18582e0_720w.webp"></p><p>Understanding the Liskov Substitution Principle</p><p><strong>Law of Demeter</strong></p><p><strong>Definition:</strong></p><blockquote><p>The Law of Demeter can be stated simply as: talk only to your immediate friends. For OOD it is further interpreted as: a software entity should interact with as few other entities as possible; each software unit should have minimal knowledge of other units, limited to those closely related to it.</p></blockquote><p><strong>Case:</strong> Executing a business activity is a process of coordinating data manipulation; in the end everyone agrees on a database write and a message send. To coordinate the various domains in this process, there is a coordination layer made up of base processes and their nodes, and the most typical coordinator inside this layer is the context (the Request, Result, and Context concepts). Data is fetched from the context, and anything to be passed to later nodes must be written back into it, a step we call write-back. From a wider angle, the entry system also acts as a coordinator: the ordering system calls the merchandise, inventory, marketing, funding, and fulfillment systems to collect and deliver data. Systems rarely call each other directly. 
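The context-as-coordinator idea might look like this (a hypothetical sketch): each node talks only to the context — its one "immediate friend" — never directly to other nodes.

```python
class Context:
    """Coordinator: nodes read from it and write results back ("write-back")."""
    def __init__(self) -> None:
        self._data: dict = {}

    def get(self, key: str):
        return self._data[key]

    def put(self, key: str, value) -> None:
        self._data[key] = value

def price_node(ctx: Context) -> None:
    # This node knows nothing about which earlier node produced the inputs.
    ctx.put("payable", ctx.get("price") - ctx.get("discount"))

ctx = Context()
ctx.put("price", 100)
ctx.put("discount", 15)
price_node(ctx)
```

Because every node depends only on the context, nodes can be reordered or replaced without rewiring direct calls between them.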
Such coordinators mostly do calls and CONVERT operations, but this CONVERT layer also brings understanding and control:</p><ul><li>The model can be streamlined, reducing the amount of data transferred and the number of CONVERTs along the chain;</li><li>Read-only access can be enforced, preventing unintended tampering downstream;</li><li>Performance can be saved: patterns such as lazy loading fetch data only when it is really needed;</li><li>……</li></ul><p><strong>Extension:</strong> Because the coordinator must carry information for every participant, it grows thicker as participants multiply. And because some systems have many layers, developers get tortured by layer upon layer of CONVERT: every new piece of data has to be threaded through all of them again. So, gradually, people turned to a shared model carrying relatively raw data. Behind that outcome lies another idea: if each participant exposes a fixed area for its raw data — a data-center style — could the coordinator’s layered relay be bypassed altogether? The data center, after all, should know best how to manage its own data. If the system hierarchy does nothing but CONVERT, this is indeed much easier; but if obscure process logic between the systems also transforms the data, that goes beyond what a data center can absorb. Moreover, with an aggregate-root design, where certain parts belong to a whole, consistency across decentralized data centers becomes hard to manage. Finally, and most importantly, the coordinator exists to coordinate, so it must be “known”: whether it is easier for a developer to find the context or the scattered data centers matters, and the decentralized approach requires firm conventions.</p><p><img src="https://pic4.zhimg.com/80/v2-7c3b22e0aff1c26d5ce63ae3a3065183_720w.webp"></p><p>Law of Demeter</p><p><strong>Interface Segregation Principle</strong></p><p><strong>Definition:</strong></p><blockquote><p>A client should not be forced to depend on interfaces it does not need. Dependencies of one class on another should be built on the smallest possible interface; it is better to use multiple specialized interfaces than a single general-purpose one.</p></blockquote><p><strong>Cases:</strong> Common instances of interface segregation include:</p><ul><li>Segregation by read/write: one set of interfaces for reading data, another for write operations;</li><li>Segregation by operator role: one set for buyer operations, one for seller operations, one for platform-staff operations;</li><li>Segregation by page type: one set for PC, one for H5, one for the client app;</li><li>Segregation by component protocol: one set for Ultron, one for Astore, one for DTO;</li><li>……</li></ul><p>When we see these scenarios we naturally think of isolation, and the code most likely already lives in different modules. But interface segregation is about more than how interfaces are declared:</p><ul><li>For the client, dependencies can also shrink (even though each system tends to have just one big client), excluding anything unnecessary;</li><li>For the server, independent development becomes easier and coupling is avoided; for reuse, SHARE and COMMON layers can still be abstracted out.</li></ul><p><strong>Extension:</strong> In the order management system there is an interface called doOp, which defines button operations; by passing different operation codes it can perform “remind to ship”, “cancel order”, “delete order”, “extend receipt”, and so on. The background is that order buttons can number in the hundreds, and defining an interface is not just a server-side matter: a wireless packaging interface (mtop) must be applied for and the client must integrate it, so to maximize reuse of the client pathway a more general interface is provided. Interface segregation, then, is not absolute; it depends on how many abstractions you make and how similar they are. Nor is the button example “absolutely un-segregated”: only the entry layer is reused, and afterwards the operations remain strictly orthogonal by button code, each routed to its own processing strategy.</p><p><img src="https://pic2.zhimg.com/80/v2-c942146f1309c3d7cfeada6210288859_720w.webp"></p><p>Understanding the Interface Segregation Principle</p><p><strong>Dependency Inversion Principle</strong></p><p><strong>Definition:</strong></p><blockquote><p>The Dependency Inversion Principle (DIP) says programs should depend on abstract interfaces, not on concrete implementations. Put simply, program against the abstraction rather than the implementation, which reduces coupling between the client and the implementing module.</p></blockquote><p><strong>Case:</strong> If you believe a basic service will not change in the short term and will never have more than one implementation, you often depend on it directly, following the “upper layer depends on lower layer” logic of the call chain, which is concise and efficient. The order query service inside the order management system, for example, serves as the Repo — an underlying service called directly as an instance within the domain. If instead the service is considered external and outside your control, and you want to isolate change and keep the freedom to upgrade the interface, you usually wrap it in another interface layer. The ordering and fulfillment systems have the concept of a gateway for this: the caller depends on the abstract service interface and is unaware of the concrete implementation instance. Adding an abstract interface layer for decoupling preserves loose coupling — the interface is an abstract contract, so the two sides can evolve independently — but it also brings management cost, which is a judgment call and a trade-off. <strong>Extension:</strong> Although the idea is conceptually sound, doing it properly still has costs:</p><ul><li>Module packaging: suppose that while A depends on B we introduce abstraction C. Since this abstraction layer belongs to neither A nor B, it should be a separate jar package with its own code repository. In practice, because creating new repositories is a hassle, it often ends up hosted as a submodule of A or B and has to be packaged separately, which is rather awkward;</li><li>The complex-object challenge: programming against abstract interfaces means more CONVERTs, which may be easy in an ordinary system but becomes painful again amid the complex object design of a trading system. To add to the misery, the domain objects of a trading system are a logical mapping of the database model, and after stacking these layers it is hard to tell how the data is actually fetched.</li></ul><p>So sometimes the choice is reversed in favor of a tightly coupled model. In a complex system one often feels that simple, pure, tight coupling is a ray of light, because you can click straight through to the relevant code, instead of clicking and clicking…… and getting lost. 
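Stepping back to the gateway idea itself, a minimal sketch (interface and class names hypothetical) shows the inversion: the domain depends on an abstract contract, not on any concrete order store.

```python
from abc import ABC, abstractmethod

class OrderQueryGateway(ABC):
    """Abstract contract the domain depends on (DIP)."""
    @abstractmethod
    def find(self, order_id: str) -> dict: ...

class DbOrderRepo(OrderQueryGateway):
    """Concrete implementation: the live order library."""
    def find(self, order_id: str) -> dict:
        return {"id": order_id, "source": "db"}

class HistoryOrderRepo(OrderQueryGateway):
    """Concrete implementation: the history library."""
    def find(self, order_id: str) -> dict:
        return {"id": order_id, "source": "history"}

def order_detail(gateway: OrderQueryGateway, order_id: str) -> dict:
    # Domain code is unaware of which concrete instance it was given.
    return gateway.find(order_id)
```

Swapping the live library for the history library (or an external service) changes only the instance wired in, never the domain code — at the cost of one more layer to manage.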
That said, this is not meant to argue for the opposite; I only hope we can look at the problem dialectically and decide per concrete scenario — every choice gives something and loses something.</p><p><img src="https://pic4.zhimg.com/80/v2-b5327f01830a3161a8205dccd9061b27_720w.webp"></p><p>Dependency Inversion Principle</p><h3 id="Design-Patterns"><a href="#Design-Patterns" class="headerlink" title="Design Patterns"></a>Design Patterns</h3><p>Here is a brief introduction to a selection of the 23 design patterns.</p><h4 id="Template"><a href="#Template" class="headerlink" title="Template"></a><strong>Template</strong></h4><p>The template method decomposes an execution process into an abstract skeleton: a standard body of logic plus extension methods for the variable parts.</p><p><img src="https://pic3.zhimg.com/80/v2-d7de768b1cac8961e070935009eb5636_720w.webp"></p><p>It maps neatly onto the platform-and-extensibility design of the transaction chain: the base template is the overall process orchestration and its nodes, and the extensible places are the various business customization areas. This yields a clean integration of platform and business.</p><p><img src="https://pic1.zhimg.com/80/v2-07a0268c518f643bb20d3f61a324ebbc_720w.webp"></p><h4 id="Chain-of-Responsibility"><a href="#Chain-of-Responsibility" class="headerlink" title="Chain of Responsibility"></a><strong>Chain of Responsibility</strong></h4><p>Chain of Responsibility passes a request along a queue of processors, one by one, until a willing handler is found.</p><p><img src="https://pic3.zhimg.com/80/v2-1865b12150f07ee5458095c9400ecd0e_720w.webp"></p><p>Business capability extension and domain extension traverse the implemented plugins and, combined with the result-merging rules, short-circuit the traversal at the right moment while collecting results. This closely mirrors the logic of the chain of responsibility. 
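The plugin traversal described above can be sketched as a first-willing-handler-wins chain (plugin names hypothetical): each plugin may decide, or decline and let the next one try.

```python
from typing import Callable, Iterable, Optional

# Each plugin returns True (skip), False (don't skip), or None (no opinion).
Plugin = Callable[[dict], Optional[bool]]

def first_decision(plugins: Iterable[Plugin], request: dict) -> bool:
    for plugin in plugins:
        verdict = plugin(request)
        if verdict is not None:
            return verdict  # short-circuit at the first willing handler
    return False            # default when no plugin decides

# Hypothetical product-package and app-package plugins.
product_pkg = lambda req: True if req.get("vip") else None
app_pkg = lambda req: False
```

The chain stops at the first plugin that expresses an opinion, which is exactly the short-circuit behavior the extension engine relies on.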
Take the example of “whether to skip notification on payment” when confirming receipt: the TMF execution engine traverses the implementations in the product packages and app packages, stops executing at the first result that returns true (skip), and the whole chain returns true.</p><p><img src="https://pic2.zhimg.com/80/v2-478e925b168c279d3afbc1058ebdf7b9_720w.webp"></p><h4 id="Strategy"><a href="#Strategy" class="headerlink" title="Strategy"></a><strong>Strategy</strong></h4><p>Strategy means there are different, interchangeable algorithms for accomplishing the same task, and the caller can switch between them as needed.</p><p><img src="https://pic4.zhimg.com/80/v2-299d128be44773987b7ddb368205bab7_720w.webp"></p><p>Reverse refunds need to support different refund links: some are secured transactions, some are margin links, some are micro-payments, and some are card and asset refunds. To support multiple outgoing-funds strategies, a strategy model is used: the various funding strategies are customized through extension points, and either a single strategy or several of them can be executed.</p><p><img src="https://pic3.zhimg.com/80/v2-8db87e1b58ae44a561b3c4df82800e6e_720w.webp"></p><h4 id="Observer"><a href="#Observer" class="headerlink" title="Observer"></a><strong>Observer</strong></h4><p>The Observer pattern is a collaboration mechanism in which parties register for changes and are notified via callbacks when those changes occur.</p><p><img src="https://pic4.zhimg.com/80/v2-2a1adccb0f988e136eeaf334cb91d00f_720w.webp"></p><p>The in-process observer pattern is not seen much in transactions, but there are many message-based observer patterns between systems. A typical one is the reverse “0-second refund”: its fast-consent function is implemented by listening for the refund-created message and then making the consent call. 
Asynchronous notification through messages gives better decoupling, and the message retry mechanism can be used on failure to increase the probability of success.</p><p><img src="https://pic3.zhimg.com/80/v2-9fe7a10233afa317f078a04a5bf0ad82_720w.webp"></p><h4 id="State"><a href="#State" class="headerlink" title="State"></a><strong>State</strong></h4><p>The State pattern means that an object exhibits different processing behavior in different states.</p><p><img src="https://pic3.zhimg.com/80/v2-242cb672c97846b40465c02456e0222e_720w.webp"></p><p>The workflow introduced in a trading system defines the states a business activity can pass through and the operations permitted in each state. For example, a normal secured-transaction flow contains the following state nodes: create external payment transaction, payment callback, create logistics order, ship the goods, and confirm receipt of the goods. Each node also defines which operations may be performed; in the “create external payment transaction” node, for instance, you can perform payment verification, close the order, or modify the price, but you cannot make payments or refunds, because no payment has happened yet.</p><p><img src="https://pic2.zhimg.com/80/v2-4e57e9d1dde80ba7b5a98fbb1d92467d_720w.webp"></p><h4 id="Mediator"><a href="#Mediator" class="headerlink" title="Mediator"></a><strong>Mediator</strong></h4><p>When multiple classes need to coordinate with one another, a mediator is often introduced to do the coordination and reduce how much each party must know about the others.</p><p><img src="https://pic3.zhimg.com/80/v2-cab45a874a3b186dac1530967a660efa_720w.webp"></p><p>During process execution in a trading system, there is a large context that coordinates data from the various domains. 
One of the more typical scenarios is that each orchestration node may produce a data update that has to be stored somewhere and handed to the final update node. The role of relaying this information usually falls to the context, acting as the mediator. Here is a rough structure of the update collaboration in a reverse process.</p><p><img src="https://pic3.zhimg.com/80/v2-fa91c36c29121b0cfa1e1f313955d482_720w.webp"></p><h4 id="Combination-Composite"><a href="#Combination-Composite" class="headerlink" title="Combination (Composite)"></a><strong>Composite</strong></h4><p>The Composite pattern describes a recursive hierarchy of objects through a shared interface and child nodes.</p><p><img src="https://pic2.zhimg.com/80/v2-43ad9ff1628c043e02831f7e0e6ccea5_720w.webp"></p><p>An easy-to-grasp example of the recursive idea is order splitting in the ordering system, where orders are grouped and split into sub-orders, over and over again. Logically, it is like recursing down to ever finer granularity.</p><p><img src="https://pic4.zhimg.com/80/v2-8bbc268b5681827d255b9919f999011b_720w.webp"></p><h4 id="Single-piece-Singleton"><a href="#Single-piece-Singleton" class="headerlink" title="Single piece (Singleton)"></a><strong>Singleton</strong></h4><p>Singleton means ensuring that an object is created only once, as a unique resource, even under multi-threading.</p><p><img src="https://pic1.zhimg.com/80/v2-a1d8c7ca77a0658e3812f9bcecb6e348_720w.webp"></p><p>In the order management system, externally invoked services are named Repo, as repositories. To make these repositories easy to reach, they are accessed through the singleton pattern, so that utility classes can call a service through static methods without injecting beans. Such repos include the order service, evaluation service, icon service, timeout service, and so on.</p><p><img src="https://pic1.zhimg.com/80/v2-d003de3baab7e31ddf459fd5ceceeef8_720w.webp"></p><h4 id="Interpreter"><a href="#Interpreter" class="headerlink" title="Interpreter"></a><strong>Interpreter</strong></h4><p>Interpreter means forming a small language for a given context and accomplishing the corresponding tasks by interpreting the meaning of its expressions.</p><p><img src="https://pic2.zhimg.com/80/v2-8647e81522dc3f6633a358d6bb1614e5_720w.webp"></p><p>The main Interpreter-style mechanism seen in transactions is the dynamic-script configuration of the Newton system in the original Tao system. This configuration platform mainly addresses the dynamic rules in product packages; with a push model, the dynamism of interpretation helps reduce deployment cost.</p><p><img src="https://pic4.zhimg.com/80/v2-4357ef67fef9e86029d8f95c9e2b60a3_720w.webp"></p><h4 id="Proxy"><a href="#Proxy" class="headerlink" title="Proxy"></a><strong>Proxy</strong></h4><p>Proxy wraps a class so that related operations are forwarded through it, possibly with extra control applied.</p><p><img src="https://pic2.zhimg.com/80/v2-78e7acdded1e601908866c95a563c7e1_720w.webp"></p><p>In the order management system, the context is protected so that it cannot be tampered with by the various domains. When execution enters a concrete node, the context is converted; during the conversion, a read-only interface wraps and proxies the entity objects, providing read-only access, so callers can neither obtain the concrete instance nor modify it through setters.</p><p><img src="https://pic1.zhimg.com/80/v2-911b762a30564071365cb56bf1b34814_720w.webp"></p><h3 id="Summary"><a href="#Summary" class="headerlink" title="Summary"></a>Summary</h3><p>Enterprise Application Architecture Patterns has a well-written description of patterns:</p><blockquote><p>Each pattern describes a problem that keeps recurring around us, together with the core of the solution to that problem. You can then use that solution again and again without duplicating the effort.</p></blockquote><p>This article drew on a few principles and design patterns to discuss some of the designs I have glimpsed in trading systems. Hopefully it gives you a perspective and a little more insight into the trading chain as I see it.</p>]]></content>
    
    
<summary type="html">Design principles and patterns in trading systems</summary>
    
    
    
    
    <category term="IaaS" scheme="https://www.nablepart.com/tags/IaaS/"/>
    
    <category term="cloud" scheme="https://www.nablepart.com/tags/cloud/"/>
    
    <category term="cloud computing" scheme="https://www.nablepart.com/tags/cloud-computing/"/>
    
  </entry>
  
  <entry>
    <title>It&#39;s 2023, so why aren&#39;t SSRs as popular as expected?</title>
    <link href="https://www.nablepart.com/dee550222620/"/>
    <id>https://www.nablepart.com/dee550222620/</id>
    <published>2023-10-29T11:50:26.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
<content type="html"><![CDATA[<p>A study found that every additional second of page load time costs about 10% of users. To improve the rate at which pages open instantly, people across the industry kept exploring optimization strategies, and once browser-side optimization could no longer satisfy the pursuit of the extreme, attention turned to the server side, which made the old concept of Server Side Rendering fashionable again.</p><p>Server-side rendering, abbreviated SSR, means doing the rendering work on the server. This approach not only helps first-screen rendering and improves the first-screen response speed of SPA applications, but also makes pages easier for search engines to crawl, which helps SEO. Yet by 2023, SSR is still not as popular as expected.</p><p>Some argue that most of the reason for using SSR was SEO, and that search engines have since kept pace with SPA development and support framework-rendered pages reasonably well, so the need for SSR is not that great. Others think SSR is a pseudo-requirement, and that pages load just as fast once business logic and controllers are separated.</p><p>But others point out that a large number of users still cannot get a good experience when visiting web pages because of their network environment or device conditions, and that if we want to improve the experience for these users, SSR is indispensable.</p><p>What is the real situation? What has prevented SSR from becoming the dominant development paradigm on the Web? Is the approach outdated in today’s environment? Which business scenarios suit SSR best? Open Source China invited two front-end leaders to share their views.</p><ul><li><p>Liu Kui, community nickname kuitos, front-end engineer in Alipay’s Experience Technology Department, author of the open-source micro-frontend framework qiankun, currently responsible for web infrastructure R&amp;D at Ant.</p></li><li><p>Liu Yong, community nickname skypig, head of Node.js Infra at a large company, core developer of EggJS &#x2F; CNPM.</p></li></ul><h2 id="I-SSR-not-a-pseudo-requirement"><a href="#I-SSR-not-a-pseudo-requirement" class="headerlink" title="I. SSR, not a pseudo-requirement"></a>I. SSR, not a pseudo-requirement</h2><p><strong>Q1: In your experience, what types of projects and scenarios most commonly use SSR? Can you give some examples?</strong></p><p><strong>Liu Kui:</strong> SSR is most common on sites that are very sensitive to first-screen performance or have strong SEO requirements, such as:</p><ul><li><p>E-commerce platforms: faster first-screen rendering lets users see product information sooner, increasing purchase conversion.</p></li><li><p>Campaign pages: SSR can measurably improve the business results of marketing campaigns.</p></li><li><p>Portals: content-oriented sites usually have stronger SEO requirements.</p></li></ul><p><strong>Q2: From your practical experience, what are the advantages of SSR over CSR (client-side rendering)?</strong></p><p><strong>Liu Kui:</strong> In my experience the biggest advantage is still the first-screen experience: with SSR, users can see meaningful page content while the HTML is still loading, which CSR can hardly achieve.</p><p><strong>Q3: Now that search engines support rendering, is there still a need to use SSR for the sake of SEO?</strong></p><p><strong>Liu Kui:</strong> For well-known reasons, domestic search engines still handle SPA-style applications poorly. If you want your site to be indexed well by crawlers, you basically still need SSR (or a variant of it).</p><p><strong>Q4: Some believe SSR is a pseudo-requirement for improving first-screen rendering performance: separate the back-end business logic from the controllers, split the controllers into view controllers and interface controllers that call the same business logic, have the front-end JavaScript load and render the data on first open, and request the interface only when the user interacts. They claim this beats SSR, which strains for performance. How do you rate that?</strong></p><p><strong>Liu Kui:</strong> That solution is still CSR in nature and cannot solve the problem native to CSR: the user must wait for the JS download to finish -&gt; an interface request to be made -&gt; the JS to receive the data and render the page, before seeing any meaningful content. Under demanding network conditions and on weaker devices, the problem only becomes more pronounced.</p><p><strong>Liu Yong:</strong> Choose based on the team’s infrastructure maturity and the business scenario. Neither solution is absolutely superior, nor are they mutually exclusive; front-end engineering can combine them into one solution.</p><h2 id="Second-SSR-want-to-red-a-little-difficult"><a href="#Second-SSR-want-to-red-a-little-difficult" class="headerlink" title="II. For SSR, popularity is hard-won"></a>II. For SSR, popularity is hard-won</h2><p><strong>Q5: As things stand, SSR has not become the mainstream Web development model. What do you think the obstacles are?</strong></p><p><strong>Liu Kui:</strong> I see mainly these kinds of reasons:</p><ul><li><p><strong>Technical complexity:</strong> SSR requires server-side rendering integrated with front-end frameworks, which demands more technical knowledge from developers.</p></li><li><p><strong>Extra development and maintenance cost:</strong> relative to CSR, SSR requires the front end to take on server-side development and operations: writing higher-performance server-side rendering logic, handling potential memory leaks, variable pollution and other isolation issues, and providing SSR disaster recovery (falling back to CSR when SSR fails). All of this takes extra team resources and time.</p></li><li><p><strong>Scenario fit:</strong> a large share of services in China are distributed through mini-programs and apps; products built on a pure Web stack are relatively rare, which differs greatly from scenarios abroad.</p></li></ul><p><strong>Liu Yong:</strong> First, SSR costs server resources, and in an era of cost-cutting it needs to be combined with infrastructure such as Serverless or edge computing to find a balance. Being server-side, it also places real demands on operations capability and on the front-end team’s technical depth.</p><p>Second, if the framework’s packaging and maintenance are not done well, it is very common for business developers to write SSR code that leaks memory. Moreover, current front-end frameworks have not been optimized for SSR scenarios: if the first screen shows quickly but the user then has to download a huge bundle, so that time-to-interactive is too slow, it is not worth it.</p><p>Finally, there is the migration-path problem. At Ant, for example, the upstream and downstream infrastructure around offline packages has been polished very well together with the app-side and network-side teams. That model has some defects, such as too many businesses competing for offline-package slots, but on first-screen performance SSR is not necessarily much better, so persuading those teams to switch to SSR meets no small resistance.</p><p><strong>Q6: Some comment that SSR is too expensive to develop and maintain and are turning back to CSR. Can CSR achieve the same effect as SSR? Are there concrete approaches?</strong></p><p><strong>Liu Yong:</strong> On the key metric of first-screen performance, an unoptimized CSR page needs at least three serial HTTP requests, so its first-screen time certainly cannot match SSR (time-to-interactive is another matter).</p><p>However, there are many corresponding optimizations, such as ServiceWorker, offline packages and so on.</p><p><strong>Liu Kui:</strong> Looking at first-screen rendering speed alone, CSR can approach the effect of SSR with the following optimizations:</p><ol><li><p><strong>First-screen static-resource optimization:</strong> use code splitting &amp; lazy loading to keep the first screen’s JS&#x2F;CSS minimal, and inline it directly into the HTML to cut the network requests needed for first-screen rendering;</p></li><li><p><strong>Caching and preloading:</strong> use client-side caching, preloading and similar mechanisms to speed up repeat visits;</p></li><li><p><strong>Lighter frameworks:</strong> choose lighter-weight front-end frameworks to reduce first-screen JS volume and improve loading speed;</p></li><li><p><strong>Faster key interfaces:</strong> optimize the response time of the interfaces behind first-screen content so the front end can render the page sooner.</p></li></ol><p>However, if there are additional SEO requirements, plain CSR will struggle to match SSR.</p><p><strong>Q7: How much would it cost to convert an existing application directly into an SSR application? What challenges would the development team face?</strong></p><p><strong>Liu Kui:</strong> The costs and challenges are roughly:</p><ol><li><p><strong>Application transformation cost:</strong> most applications cannot run in a server-side environment as-is and need some degree of transformation, such as removing the first-screen rendering code’s dependence on window, location and other browser-only APIs, and building a JS runtime for the server side.</p></li><li><p><strong>SSR development and operations challenges:</strong> teams with rich experience in both front-end and server-side development are rare in most companies. As mentioned earlier, SSR brings extra server-side development and operations work that the front-end team must also plan for.</p></li></ol><h2 id="III-Maybe-SSR-CSR-will-be-the-new-direction-in-the-future"><a href="#III-Maybe-SSR-CSR-will-be-the-new-direction-in-the-future" class="headerlink" title="III. Maybe, SSR + CSR will be the new direction in the future?"></a>III. Maybe, SSR + CSR will be the new direction in the future?</h2><p><strong>Q8: Some sites now render the first screen on the server, so the page a user opens first renders fast, while other pages are rendered on the client, completing the separation of front end and back end. Do you think this is a near-ideal solution that combines the advantages of both?</strong></p><p><strong>Liu Kui:</strong> Yes, this is also the current best practice in the community; it preserves the advantages of both SSR and SPA applications.</p><p><strong>Liu Yong:</strong> There were related practices many years ago. For example, Yunlong’s Scrat Pagelet at UC was similar, and at that stage even subsequent pages were partially rendered on the server, updating the front-end page on demand.</p><p>There has been more recent industry practice of this approach: developers write logic naturally, without caring what is separated from what, and the front-end engineering layer splits it automatically into SSG + SSR + CSR. Some parts can be built statically at build time, some rendered on the server, and the remaining components rendered directly on the client. All of this is achievable, provided the front-end engineering infrastructure is mature enough and the development model is convergent enough.</p><p>As a final note, most SSR practices I know of also put a short-TTL CDN cache in front, and then handle per-user personalization and subsequent business logic through CSR.</p><p><strong>Q9: How do you see SSR developing? Will it be phased out as hardware improves, or become more and more popular as the technology evolves?</strong></p><p><strong>Liu Yong:</strong> Optimization ideas do not become obsolete. Perhaps one day the familiar programming interface of SSR will have changed; it used to be nunjucks, ejs and other templates, and now it is React and Vue. New technologies will keep coming, but they are likely to be just another practice model within SSR.</p><p><strong>Liu Kui:</strong> In my experience, new technical solutions usually try to squeeze more out of the hardware for a better interactive experience, so there will always be relatively “low-end” devices at any given time, and that should never fully resolve itself (laughs).</p><p>In my view, the biggest landing cost of SSR is still server-side development and operations, which is a heavy burden for the front-end team of most companies; the ROI then looks low, making SSR hard to land. However, with the development of Serverless there are now many almost “zero-ops” Serverless offerings, which greatly reduce the front-end team’s operations cost. At the same time, community trends show the popular front-end frameworks of recent years embracing Edge and SSR, such as Next.js, remix-run, Qwik, Astro and Fresh, while libraries such as React have introduced streaming SSR capabilities for better performance. The integration and iteration of these frameworks not only significantly reduces the development cost of SSR applications for front-end engineers, but also further improves on the performance of traditional SSR.</p><p>Judging by the current trend, I think SSR will become more and more popular as development and operations costs come down.</p><p><strong>Q10: Based on your project experience, how would you evaluate the SSR model?</strong></p><p><strong>Liu Yong:</strong> Looking at the historical evolution of the front end, it went SSR → CSR → SSR, which at a rough glance seems to drive history backwards, but in reality it does not.</p><p>For example, when front-end HTML + CSS + JS lived in a single all-in-one file, it was because the front end had no build capability and everything had to be written together. As front-end engineering evolved, organizing development across multiple files with automated builds became mainstream; later, single-file formats like the Vue SFC appeared. Is that a step backwards? No: as the infrastructure improves, the programming interface offered to users can become more intuitive, leaving concerns like performance and deployment to the tools.</p><p>So I think the SSR model has real scenarios, but at this stage there are still many practical performance and engineering issues to resolve before it can land better.</p><p><strong>Liu Kui:</strong> Although CSR can deliver a decent first-screen experience, it has an obvious performance ceiling set by the capability of the user’s device. SSR, on the other hand, can be combined well with edge computing.</p>]]></content>
    
    
    <summary type="html">It&#39;s 2023, so why aren&#39;t SSRs as popular as expected?</summary>
    
    
    
    
    <category term="IaaS" scheme="https://www.nablepart.com/tags/IaaS/"/>
    
    <category term="cloud" scheme="https://www.nablepart.com/tags/cloud/"/>
    
    <category term="cloud computing" scheme="https://www.nablepart.com/tags/cloud-computing/"/>
    
  </entry>
  
  <entry>
<title>A brief look at 10 game-changing features of C++11</title>
    <link href="https://www.nablepart.com/6888e7a0e765/"/>
    <id>https://www.nablepart.com/6888e7a0e765/</id>
    <published>2023-10-29T11:28:00.000Z</published>
    <updated>2025-08-25T09:00:39.798Z</updated>
    
    <content type="html"><![CDATA[<blockquote><p>C++11,自1998年C++初次标准化以来的第二个重要标准,引入了大量重要的改变。与C++98&#x2F;03相比,C++11增加了超过140项新特性和600多项缺陷修复,为系统和库开发带来了革命性的变化。本文旨在探讨C++11中最有价值和最常用的新特性和功能。</p></blockquote><h2 id="auto关键字-简化类型推断"><a href="#auto关键字-简化类型推断" class="headerlink" title="auto关键字:简化类型推断"></a>auto关键字:简化类型推断</h2><p>C++11中最显著的新增功能之一是auto关键字,它充当类型说明符。使用auto声明变量时,必须初始化它。在编译期间,编译器根据auto声明右侧表达式的实际类型来推断变量的实际类型。本质上,auto充当变量的实际类型的占位符,该类型由编译器确定。这允许用auto替换冗长的类型声明。但是,在使用auto时需要注意一些准则:</p><ul><li><p>声明指针类型时,auto和auto*之间没有区别,但引用必须使用auto&amp;。</p></li><li><p>在同一行上使用auto声明多个变量时,它们必须都是同一类型。编译器根据第一个变量推断类型,并将其应用于其余变量。</p></li><li><p>auto不能用作函数参数。</p></li><li><p>auto不能用来声明数组。</p></li></ul><h2 id="decltype关键字-根据表达式指定类型"><a href="#decltype关键字-根据表达式指定类型" class="headerlink" title="decltype关键字:根据表达式指定类型"></a>decltype关键字:根据表达式指定类型</h2><p>与auto相反,decltype关键字允许使用由表达式指定的类型声明变量。而auto充当类型推断的占位符,decltype明确声明基于表达式类型的变量。考虑以下示例:</p><figure class="highlight c++"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">template</span>&lt;<span class="keyword">class</span> T1, <span class="keyword">class</span> T2&gt;</span></span><br><span class="line"><span class="function"><span class="type">void</span> <span class="title">func</span><span class="params">(T1 x, T2 y)</span> </span>&#123;</span><br><span class="line">  <span class="keyword">decltype</span>(x * y) ret = x * y;</span><br><span class="line">  cout &lt;&lt; <span class="built_in">typeid</span>(ret).<span class="built_in">name</span>() &lt;&lt; endl; </span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>在上面的代码中,decltype(x * y)声明变量ret与x * y的结果具有相同的类型。当需要表达式的确切类型进行进一步操作时,此特性尤其有用。</p><h2 id="nullptr关键字-增强的空指针处理"><a href="#nullptr关键字-增强的空指针处理" class="headerlink" 
title="nullptr关键字:增强的空指针处理"></a>nullptr关键字:增强的空指针处理</h2><p>在C中,NULL是一个在stddef.h头文件中定义的宏,可用于表示空指针。但是,为了确保类型安全性和改进函数重载的支持,C++引入了nullptr关键字。nullptr是一个指针类型,用作NULL的替代品。事实上,nullptr被定义为((void*)0)。使用nullptr的好处可以在涉及函数重载的场景中看到:</p><figure class="highlight c++"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="type">void</span> <span class="title">func</span><span class="params">(<span class="type">int</span> x)</span> </span>&#123;</span><br><span class="line">  cout &lt;&lt; <span class="string">&quot;void func(int x)&quot;</span> &lt;&lt; endl; </span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="type">void</span> <span class="title">func</span><span class="params">(<span class="type">int</span>* x)</span> </span>&#123;</span><br><span class="line">  cout &lt;&lt; <span class="string">&quot;void func(int* x)&quot;</span> &lt;&lt; endl;</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="type">void</span> <span class="title">test</span><span class="params">()</span> </span>&#123;</span><br><span class="line">  <span class="built_in">func</span>(<span class="literal">NULL</span>);     <span class="comment">// void func(int x)</span></span><br><span class="line">  <span class="built_in">func</span>(<span class="literal">nullptr</span>);  <span class="comment">// void func(int* x)</span></span><br><span class="line">  
cout &lt;&lt; <span class="built_in">typeid</span>(<span class="literal">NULL</span>).<span class="built_in">name</span>() &lt;&lt; endl;      <span class="comment">// int</span></span><br><span class="line">  cout &lt;&lt; <span class="built_in">typeid</span>(<span class="literal">nullptr</span>).<span class="built_in">name</span>() &lt;&lt; endl;   <span class="comment">// std::nullptr</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>通过使用nullptr,编译器可以准确确定要调用的重载函数,消除潜在的歧义。</p><h2 id="explicit关键字-防止隐式类型转换"><a href="#explicit关键字-防止隐式类型转换" class="headerlink" title="explicit关键字:防止隐式类型转换"></a>explicit关键字:防止隐式类型转换</h2><p>explicit关键字主要用于防止自动隐式类型转换,特别是在构造函数中。通过将explicit应用于单参数或多参数构造函数,可以禁止通过隐式类型转换直接构造对象。考虑以下示例:</p><figure class="highlight c++"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">class</span> <span class="title class_">demo</span> &#123;</span><br><span class="line"><span class="keyword">public</span>:</span><br><span class="line">  <span class="function"><span class="keyword">explicit</span> <span class="title">demo</span><span class="params">(<span class="type">int</span> a)</span> : _a(a) &#123;</span> &#125; </span><br><span class="line"><span class="keyword">private</span>:</span><br><span class="line">  <span class="type">int</span> _a;</span><br><span class="line">&#125;;</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="type">void</span> <span class="title">test</span><span class="params">()</span> 
</span>&#123;</span><br><span class="line">  <span class="comment">// implicit conversion, invoking the single-argument constructor</span></span><br><span class="line">  <span class="comment">// demo d = 10;</span></span><br><span class="line"></span><br><span class="line">  <span class="comment">// with explicit, such conversions are forbidden and compilation fails</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>The explicit keyword provides a compile-time check that only explicit type conversions are allowed, promoting safer, more deliberate code.</p><h2 id="final关键字-限制继承和重写"><a href="#final关键字-限制继承和重写" class="headerlink" title="The final keyword: restricting inheritance and overriding"></a>The final keyword: restricting inheritance and overriding</h2><p>In C++11 the final keyword serves two purposes: restricting class inheritance and preventing virtual functions from being overridden. Marking a class final makes it non-inheritable; similarly, applying final to a virtual function in a base class prevents derived classes from overriding it. Consider the following example:</p><figure class="highlight c++"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">class</span> <span class="title class_">A</span> <span class="keyword">final</span> &#123;</span><br><span class="line"><span class="keyword">public</span>:</span><br><span class="line">  <span class="function"><span class="keyword">virtual</span> <span class="type">void</span> <span class="title">func</span><span class="params">()</span> </span>&#123;</span><br><span class="line">    cout &lt;&lt; <span class="string">&quot;A::func()&quot;</span> &lt;&lt; endl;</span><br><span class="line">  &#125;</span><br><span class="line">&#125;;</span><br><span class="line"></span><br><span class="line"><span class="comment">// error: class B cannot inherit from the final class A</span></span><br><span class="line"><span class="comment">// class B : public A &#123;</span></span><br><span class="line"><span class="comment">// public:</span></span><br><span class="line"><span class="comment">//   virtual void func() &#123;</span></span><br><span class="line"><span class="comment">//     cout &lt;&lt; &quot;B::func()&quot; &lt;&lt; endl;  </span></span><br><span class="line"><span class="comment">//   &#125;</span></span><br><span class="line"><span class="comment">// &#125;;</span></span><br><span class="line"></span><br><span class="line"><span class="keyword">class</span> <span class="title class_">B</span> &#123;</span><br><span class="line"><span class="keyword">public</span>:</span><br><span class="line">  <span class="function"><span class="keyword">virtual</span> <span class="type">void</span> <span class="title">func</span><span class="params">()</span> <span class="keyword">final</span> </span>&#123;</span><br><span class="line">    cout &lt;&lt; <span class="string">&quot;B::func()&quot;</span> &lt;&lt; endl;</span><br><span class="line">  &#125; </span><br><span class="line">&#125;;</span><br><span class="line"></span><br><span class="line"><span class="keyword">class</span> <span class="title class_">C</span> : <span class="keyword">public</span> B &#123;  </span><br><span class="line"><span class="keyword">public</span>:</span><br><span class="line">  <span class="comment">// error: cannot override the final function B::func()</span></span><br><span class="line">  <span class="comment">// virtual void func() &#123;</span></span><br><span class="line">  <span class="comment">//   cout &lt;&lt; &quot;C::func()&quot; &lt;&lt; endl;</span></span><br><span class="line">  <span class="comment">//   &#125;</span></span><br><span class="line">&#125;;</span><br></pre></td></tr></table></figure><p>The final and override keywords act as compile-time checks that enforce the intended behavior; if a violation occurs, the code fails to compile.</p><h2 id="initializer-list容器-简化初始化"><a href="#initializer-list容器-简化初始化" class="headerlink" title="The initializer_list container: simplified initialization"></a>The initializer_list container: simplified initialization</h2><p>C++11 introduces the initializer_list container, which allows objects to be initialized concisely with braces {}. The feature applies to all types, unlike C++98, where braces could only be used to initialize arrays. When the compiler encounters code such as object &#x3D; {arg1, arg2, arg3}, it automatically constructs an initializer_list. The constructors of the C++11 containers take advantage of this initialization mechanism. Consider the following example with vector:</p><figure class="highlight c++"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">class</span> <span class="title class_">point</span> &#123;</span><br><span class="line"><span class="keyword">public</span>:</span><br><span class="line">  <span class="built_in">point</span>(<span class="type">int</span> x, <span class="type">int</span> y) : _x(x), _y(y) &#123;</span><br><span class="line">    cout &lt;&lt; <span class="string">&quot;point&quot;</span> &lt;&lt; endl;</span><br><span class="line">  &#125;</span><br><span class="line">  <span class="built_in">point</span>(<span class="type">const</span> point&amp; p) : _x(p._x), _y(p._y) &#123;</span><br><span class="line">    cout &lt;&lt; <span class="string">&quot;point(const point&amp; p)&quot;</span> &lt;&lt; endl; </span><br><span class="line">  &#125;</span><br><span class="line"><span class="keyword">private</span>:</span><br><span class="line">  <span class="type">int</span> _x;</span><br><span class="line">  <span class="type">int</span> _y;  </span><br><span class="line">&#125;;</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="type">void</span> <span class="title">test</span><span class="params">()</span> </span>&#123;</span><br><span class="line">  <span class="function">point <span class="title">p1</span><span class="params">(<span class="number">1</span>, <span class="number">2</span>)</span></span>;                         <span class="comment">// calls the two-argument constructor</span></span><br><span class="line">  point p2 = &#123; <span class="number">3</span>, <span class="number">4</span> &#125;;                    <span class="comment">// implicit conversion via the multi-argument constructor</span></span><br><span class="line">  point p3&#123; <span class="number">5</span>, <span class="number">6</span> &#125;;                       <span class="comment">// direct list-initialization</span></span><br><span class="line">  vector&lt;point&gt; v&#123; &#123;<span class="number">1</span>, <span class="number">2</span>&#125;, &#123;<span class="number">3</span>, <span class="number">4</span>&#125;, &#123;<span class="number">5</span>, <span class="number">6</span>&#125; &#125;; <span class="comment">// list-initialization of the elements</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>The initializer_list container enables concise, consistent object initialization across the standard containers.</p><h2 id="unordered-map和unordered-set容器-高效的基于散列表的数据结构"><a href="#unordered-map和unordered-set容器-高效的基于散列表的数据结构" class="headerlink" title="The unordered_map and unordered_set containers: efficient hash-table-based data structures"></a>The unordered_map and unordered_set containers: efficient hash-table-based data structures</h2><p>The unordered_map and unordered_set containers introduced in C++11 provide efficient hash-table-based data structures. They are part of the Standard Template Library (STL) and offer average constant-time complexity for operations such as insertion, deletion, and retrieval. By using a hash function, unordered_map and unordered_set give faster element access than their map and set counterparts. These containers are particularly useful when handling large data sets or in scenarios that require fast lookups. unordered_map stores key-value pairs, while unordered_set stores unique elements. Both offer interfaces similar to their ordered counterparts, which makes it easy to switch between the two based on performance requirements.</p><h2 id="右值引用和移动语义-优化对象构造和赋值"><a href="#右值引用和移动语义-优化对象构造和赋值" class="headerlink" title="Rvalue references and move semantics: optimizing object construction and assignment"></a>Rvalue references and move semantics: optimizing object construction and assignment</h2><p>C++11 introduces rvalue references and move semantics to optimize object construction and assignment. Before C++11, lvalue references were commonly used when passing objects as function arguments, to minimize copying and improve efficiency. However, that approach only works for lvalues. For objects requiring deep copies, passing and returning by value still produced multiple copies. To address this, C++11 introduced rvalue references and move semantics.</p><p>An rvalue reference can bind directly to a temporary object (also known as an rvalue). This enables efficient transfer of resources from the temporary to another object, either through move construction or move assignment: move construction transfers the rvalue’s resources into a new object, while move assignment transfers them into an existing object.</p><p>Consider the following example:</p><figure class="highlight c++"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">class</span> <span class="title class_">demo</span> &#123;  </span><br><span class="line"><span class="keyword">public</span>:</span><br><span class="line">  <span class="built_in">demo</span>() &#123; &#125;</span><br><span class="line">  <span class="built_in">demo</span>(<span class="type">const</span> demo&amp; d) &#123;</span><br><span class="line">    cout &lt;&lt; <span class="string">&quot;demo(const demo&amp; d), deep copy&quot;</span> &lt;&lt; endl;</span><br><span class="line">  &#125;</span><br><span class="line">  demo&amp; <span class="keyword">operator</span>=(<span class="type">const</span> demo&amp; d) &#123;</span><br><span class="line">    <span class="comment">/* copy the resources */</span></span><br><span class="line">    cout &lt;&lt; <span class="string">&quot;demo&amp; operator=(const demo&amp; d), deep copy&quot;</span> &lt;&lt; endl;</span><br><span class="line">    <span class="keyword">return</span> *<span class="keyword">this</span>;</span><br><span class="line">  &#125;</span><br><span class="line"><span class="keyword">private</span>:</span><br><span class="line">  <span class="comment">/* resources */</span></span><br><span class="line">&#125;;</span><br><span class="line"></span><br><span class="line"><span class="function">demo <span class="title">getTmpObj</span><span class="params">()</span> </span>&#123;</span><br><span class="line">  demo d;</span><br><span class="line">  <span class="keyword">return</span> d; </span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="type">void</span> <span class="title">test</span><span class="params">()</span> </span>&#123;</span><br><span class="line">  <span class="comment">/* scenario 1 */</span></span><br><span class="line">  demo ret_1 = <span class="built_in">getTmpObj</span>();   <span class="comment">// deep copy</span></span><br><span class="line"></span><br><span class="line">  <span class="comment">/* scenario 2 */</span> </span><br><span class="line">  demo ret_2;</span><br><span class="line">  ret_2 = <span class="built_in">getTmpObj</span>();        <span class="comment">// deep copy, deep copy</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>In scenario 1, when getTmpObj() returns an rvalue, a deep copy is performed to build ret_1. In scenario 2, however, two deep copies take place: one when the temporary is returned and another during the subsequent assignment to ret_2. For large objects this is very inefficient.</p><p>Move semantics were introduced to address this inefficiency: the deep copies can be replaced by resource transfers, significantly improving performance.</p><figure class="highlight c++"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br></pre></td><td class="code"><pre><span class="line"><span class="built_in">demo</span>(demo&amp;&amp; d) &#123;</span><br><span class="line">  <span class="comment">/* transfer the resources */</span></span><br><span class="line">  cout &lt;&lt; <span class="string">&quot;demo(demo&amp;&amp; d), move construction&quot;</span> &lt;&lt; endl;</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line">demo&amp; <span class="keyword">operator</span>=(demo&amp;&amp; d) &#123;</span><br><span class="line">  <span class="comment">/* transfer the resources */</span> </span><br><span class="line">  cout &lt;&lt; <span class="string">&quot;demo&amp; operator=(demo&amp;&amp; d), move assignment&quot;</span> &lt;&lt; endl;</span><br><span class="line">  <span class="keyword">return</span> *<span class="keyword">this</span>;</span><br><span class="line">&#125;  </span><br><span class="line"></span><br><span class="line"><span class="function"><span class="type">void</span> <span class="title">test</span><span class="params">()</span> </span>&#123;</span><br><span class="line">  <span class="comment">/* scenario 1 */</span></span><br><span class="line">  demo ret_1 = <span class="built_in">getTmpObj</span>();   <span class="comment">// move construction</span></span><br><span class="line"></span><br><span class="line">  <span class="comment">/* scenario 2 */</span></span><br><span class="line">  demo ret_2; </span><br><span class="line">  ret_2 = <span class="built_in">getTmpObj</span>();        <span class="comment">// move construction, move assignment</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>By implementing move construction and move assignment, unnecessary deep copies are eliminated, which greatly improves performance, especially for larger objects.</p><h2 id="完美转发-保留引用属性"><a href="#完美转发-保留引用属性" class="headerlink" title="Perfect forwarding: preserving reference properties"></a>Perfect forwarding: preserving reference properties</h2><p>Perfect forwarding is a technique that preserves the reference properties of forwarded arguments. Arguments are passed along exactly as received, without losing their value category. Perfect forwarding is typically used in function templates, where the exact type of each argument must be preserved during forwarding.</p><p>Consider the following example:</p><figure class="highlight c++"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="type">void</span> 
<span class="title">Func</span><span class="params">(<span class="type">int</span>&amp; x)</span> </span>&#123;</span><br><span class="line">  cout &lt;&lt; <span class="string">&quot;lvalue reference&quot;</span> &lt;&lt; endl;</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="type">void</span> <span class="title">Func</span><span class="params">(<span class="type">const</span> <span class="type">int</span>&amp; x)</span> </span>&#123;</span><br><span class="line">  cout &lt;&lt; <span class="string">&quot;const lvalue reference&quot;</span> &lt;&lt; endl;</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="type">void</span> <span class="title">Func</span><span class="params">(<span class="type">int</span>&amp;&amp; x)</span> </span>&#123;</span><br><span class="line">  cout &lt;&lt; <span class="string">&quot;rvalue reference&quot;</span> &lt;&lt; endl;  </span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="type">void</span> <span class="title">Func</span><span class="params">(<span class="type">const</span> <span class="type">int</span>&amp;&amp; x)</span> </span>&#123;</span><br><span class="line">  cout &lt;&lt; <span class="string">&quot;const rvalue reference&quot;</span> &lt;&lt; endl;</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">template</span>&lt;<span class="keyword">class</span> T&gt;  </span></span><br><span class="line"><span class="function"><span class="type">void</span> <span class="title">referenceTransmit</span><span class="params">(T&amp;&amp; t)</span> </span>&#123;</span><br><span class="line">  <span class="built_in">Func</span>(t);</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">template</span>&lt;<span class="keyword">class</span> T&gt;</span></span><br><span class="line"><span class="function"><span class="type">void</span> <span class="title">perfectForward</span><span class="params">(T&amp;&amp; t)</span> </span>&#123;</span><br><span class="line">  <span class="built_in">Func</span>(forward&lt;T&gt;(t)); </span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="type">void</span> <span class="title">test</span><span class="params">()</span> </span>&#123;</span><br><span class="line">  <span class="type">int</span> a = <span class="number">10</span>;</span><br><span class="line">  <span class="type">int</span> b = a;</span><br><span class="line">  <span class="built_in">referenceTransmit</span>(a);           <span class="comment">// &quot;lvalue reference&quot;</span></span><br><span class="line">  <span class="built_in">referenceTransmit</span>(a + b);       <span class="comment">// &quot;lvalue reference&quot; </span></span><br><span class="line">  <span class="built_in">perfectForward</span>(a);              <span class="comment">// &quot;lvalue reference&quot;</span></span><br><span class="line">  <span class="built_in">perfectForward</span>(a + b);          <span class="comment">// &quot;rvalue reference&quot;</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>In the code above, referenceTransmit loses the value category of its argument: inside the function, the parameter t is a named variable, hence an lvalue, so Func(t) always selects the lvalue-reference overload, even when an rvalue such as a + b was passed in. perfectForward uses forward&lt;T&gt;(t) to restore the original value category, so rvalue arguments reach the rvalue-reference overload. This behavior allows arguments to be forwarded while keeping their original reference characteristics.</p><h2 id="可变参数模板-处理可变参数"><a href="#可变参数模板-处理可变参数" class="headerlink" title="Variadic templates: handling a variable number of arguments"></a>Variadic templates: handling a variable number of arguments</h2><p>C++11 introduces variadic templates, which can handle a variable number and variety of argument types. Variadic templates allow functions and classes that accept any number of arguments. This flexibility is achieved by declaring the template with a parameter pack, written class… Args or typename… Args; the pack Args is then expanded with the Args… args syntax. Variadic templates are especially useful in scenarios where the number or types of arguments are unknown or may vary.</p><p>Consider the following example:</p><figure class="highlight c++"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="type">void</span> <span class="title">cppPrint</span><span class="params">()</span> </span>&#123;</span><br><span class="line">  cout &lt;&lt; <span class="string">&quot;end&quot;</span>;</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">template</span>&lt;<span class="keyword">class</span> T, <span class="keyword">class</span>... Args&gt;  </span></span><br><span class="line"><span class="function"><span class="type">void</span> <span class="title">cppPrint</span><span class="params">(T argu, Args... args)</span> </span>&#123;</span><br><span class="line">  cout &lt;&lt; argu &lt;&lt; <span class="string">&quot; &quot;</span>;</span><br><span class="line">  <span class="built_in">cppPrint</span>(args...);</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="type">void</span> <span class="title">test</span><span class="params">()</span> </span>&#123;</span><br><span class="line">  <span class="built_in">cppPrint</span>(<span class="number">1</span>, <span class="string">&quot;hello&quot;</span>, <span class="number">3</span>, <span class="string">&#x27;a&#x27;</span>); <span class="comment">// 1 hello 3 a end</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>The cppPrint function demonstrates the power of variadic templates: it accepts and prints any number of arguments. Through recursion and pack expansion, each argument is processed in turn until none remain.</p><h2 id="使用列表初始化进行初始化-简化对象初始化"><a href="#使用列表初始化进行初始化-简化对象初始化" class="headerlink" title="List initialization: simplifying object initialization"></a>List initialization: simplifying object initialization</h2><p>In C++11, list initialization with braces {} is extended to support initialization of all types, not just arrays. This enhancement provides a consistent, concise way to initialize objects. List initialization builds on the initializer_list container, which the compiler constructs automatically when it encounters a braced list of values. The feature simplifies and unifies the initialization process, making code more readable and maintainable.</p><h2 id="结论"><a href="#结论" class="headerlink" title="Conclusion"></a>Conclusion</h2><p>C++11 brought a wealth of new features and capabilities that greatly enhance the language. The auto keyword simplifies type deduction, while decltype allows types to be specified from expressions. The nullptr keyword improves null-pointer handling, and the explicit keyword prevents implicit type conversions. The final keyword restricts inheritance and overriding, safeguarding code integrity. The initializer_list container enables simplified, consistent object initialization, while the unordered_map and unordered_set containers provide efficient hash-table-based data structures. Rvalue references and move semantics optimize object construction and assignment, and perfect forwarding preserves reference properties. Variadic templates handle variable argument lists, and list initialization simplifies object initialization.</p><p>With these features, C++11 ushered in a new era of C++ programming, offering greater efficiency, safety, and flexibility. Embracing these advances can significantly improve development productivity and code quality. As C++ continues to evolve, it is important for developers to keep up and harness the power of modern C++ features.</p>]]></content>
    
    
    <summary type="html">An article introducing the important new features added in C++11, covering the major additions to the language in detail</summary>
    
    
    
    <category term="Tutorials" scheme="https://www.nablepart.com/categories/%E6%95%99%E7%A8%8B%E6%8C%87%E5%8D%97/"/>
    
    
    <category term="C++11" scheme="https://www.nablepart.com/tags/C-11/"/>
    
    <category term="Modern C++" scheme="https://www.nablepart.com/tags/%E7%8E%B0%E4%BB%A3C/"/>
    
  </entry>
  
  <entry>
    <title>How do arms dealers market guns in shooters?</title>
    <link href="https://www.nablepart.com/5851c6bb98bd/"/>
    <id>https://www.nablepart.com/5851c6bb98bd/</id>
    <published>2023-10-28T12:00:00.000Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
    <content type="html"><![CDATA[<p><img src="https://s2.loli.net/2023/10/30/lPpgLFv12oh6yE4.png" alt="image.png"></p><h2 id="Lifting-a-rock-and-stoning-themselves"><a href="#Lifting-a-rock-and-stoning-themselves" class="headerlink" title="Lifting a rock and stoning themselves"></a>Lifting a rock and dropping it on their own feet</h2><p>At E3 2009, Call of Duty: Modern Warfare 2 (COD6) served up a campaign demo that FPS veterans still talk about.</p><p><img src="https://s2.loli.net/2023/10/30/VLXhSiTa7JFuws9.png" alt="image.png"></p><p>The demo clip, uploaded by IGN, has racked up a whopping 10.76 million views.</p><p>The clip shows the game’s third mission, “Cliffhanger”, the debut of the story’s special forces unit, Task Force 141, made up of “the best soldiers in the world”. Captain John “Soap” MacTavish and the newcomer “Roach”, just the two of them, infiltrate a Russian airbase in the snowy mountains to wreak havoc.</p><p>After a harrowing climb, the duo reach the mountaintop, ready their weapons and clear out the Russian patrols. Soap handles long-range cover, so he carries a sniper rifle.</p><p>The player’s assault rifle, fitted with a close-range optic, a suppressor and a heartbeat sensor and painted in snow camouflage, is identified in the bottom-right corner of the screen as the ACR.</p><p><img src="https://s2.loli.net/2023/10/30/tXAiNupCazbxwOJ.png" alt="image.png"></p><p>When the game was released and players stepped into the mission as “Roach”, they discovered the ACR’s appeal: low recoil, easy handling, and a heartbeat sensor that reveals enemy positions early in a blizzard with poor visibility.</p><p>Few of the guns carried by the notional Russian enemies can outmatch an ACR in this configuration. 
Players who finish the campaign and move into multiplayer matchmaking find the ACR just as much a jack of all trades there.</p><p>The ACR is thus remembered as one of COD6 multiplayer’s “noob weapons”, or what Chinese players now call “wheelchair” weapons: no matter how bad a player is, it can carry them.</p><p><img src="https://s2.loli.net/2023/10/30/3RhF1CNjcqrfv9G.png" alt="image.png"></p><p>A thread posted on the GameFAQs forum 13 years ago reads: “I like the ACR, but my friends say it’s a wheelchair.”</p><p>The ACR is a real-life firearm, the Adaptive Combat Rifle, manufactured by the venerable American arms company Remington. Call of Duty deliberately tuned a real firearm up to top-tier performance, which looks more or less like advertising for Remington.</p><p>As it happens, Remington thought so too.</p><p><img src="https://s2.loli.net/2023/10/30/TpMIXlgyDWU8PCV.png" alt="image.png"></p><h2 id="ACR"><a href="#ACR" class="headerlink" title="ACR"></a>ACR</h2><p>According to an Oct. 16 report in the Wall Street Journal, an attorney recently made public a batch of internal Remington documents, including internal emails and company records.</p><p>According to these documents, Remington explicitly signed a deal with Call of Duty publisher Activision Blizzard in 2009 to put one of its rifle products into COD6 as a marketing tool to attract younger customers. And yes, that rifle is the ACR.</p><p><img src="https://s2.loli.net/2023/10/30/C1wg7uPIkyK6sdz.png" alt="image.png"></p><p>The documents show that executives at Remington and its parent company, Freedom Group, were concerned about the “aging” of their customer base and planned to put their guns into shooters as a marketing campaign to attract a new audience, especially a younger one. 
</p><p>An undated Freedom Group memo entitled “Gaming Strategy” states: “With increasing urbanization and reduced access to shooting&#x2F;hunting areas, the primary way young potential shooters are exposed to firearms and ammunition is through virtual gaming scenarios.”</p><p>Above Freedom Group sits yet another owner, Cerberus Capital Management. Cerberus executives agreed with the idea of using games to advertise firearms, saying it would “help create brand preference among the next generation” and “gain market share with younger consumers.”</p><p>So the executives turned to Call of Duty, the hottest shooter in town, even though they had done little research on the games themselves.</p><p>The “Gaming Strategy” memo explicitly forbade the use of the company’s brands in games where “non-military bad guys” might be targeted. Yet when COD6 came out, it caused a huge controversy over “No Russian”, the level immediately following “Cliffhanger”, which depicts a gory massacre of civilians.</p><p><img src="https://s2.loli.net/2023/10/30/3BLWuhS5emCO4M8.png" alt="image.png"></p><p><strong>A mission even more notorious than “Cliffhanger”</strong></p><p>But the memo also allowed that digital replicas of the guns could appear in such games. “Past experience tells us that people will actively seek out firearms branding.” “Reducing direct branding helps us to insulate ourselves from direct recognition while still benefiting from participation in the games.”</p><p>John C. Trull, then Remington’s vice president of arms product management, said in a Wall Street Journal interview that Remington executives had not even realized that COD6 had a multiplayer mode, much less imagined that their weapons would be used for player-versus-player shooting. 
“I’m sure if someone knew then how these games as we know them now evolved, the decision-making would have been different.”</p><p><strong>In the end, the Remington ACR joined the Call of Duty arsenal. As part of the agreement, Remington and Activision agreed to keep the deal strictly confidential, and no money changed hands: in effect, Remington got free advertising and Activision got free use of the rifle.</strong></p><h2 id="COD6-sold-around-4-7-million"><a href="#COD6-sold-around-4-7-million" class="headerlink" title="COD6 sold around 4.7 million"></a>COD6 sold around 4.7 million</h2><p>According to Activision Blizzard’s sales figures, COD6 sold around 4.7 million copies in the US and UK within 24 hours of its release, and in August 2011 the then chief executive of Activision Publishing revealed that sales of the game had topped 22 million copies.</p><p><img src="https://s2.loli.net/2023/10/30/Xc6KDQzCMeBEdoW.png" alt="image.png"></p><p><strong>COD6 was unsurprisingly the best-selling game of 2009</strong></p><p>The ACR’s strong position in the game is remembered by a generation of players. A real-life selling point of the ACR was its high degree of modularity and broad accessory compatibility, which is fully reflected in COD6.</p><p>Another was its low recoil. In 2010, Trull wrote to other executives that a man who worked at his house told him that the ACR had earned a following among the Call of Duty faithful. “What people like about it is its ‘low recoil’ in-game, which allows players to maintain target acquisition.”</p><p><img src="https://s2.loli.net/2023/10/30/Q4l8eRYvU2CnO3J.png" alt="image.png"></p><p>Low recoil equals high hit rate</p><p>Roy Gifford, then vice president of branding and research, responded later that day, “It’s really amazing that games can sell real-world product attributes.”</p><p>By 2011’s Call of Duty: Modern Warfare 3 (COD8), Remington’s partnership with Activision intensified. 
The ACR was swapped out for a larger-caliber version, the “ACR 6.8”, which kept the low recoil while adding lethality, and its “wheelchair” status remained firmly in place.</p><p>Two more Remington weapons were added to the game. The first is the MSR bolt-action sniper rifle, preferred over the game’s other bolt-action sniper for its faster bolt cycling and reloads. The second is the R11 RSASS semi-automatic sniper rifle, which serves as a rapid-fire sniper with the largest magazine capacity and lowest recoil, and is every bit as easy to use as the ACR.</p><p>No documents have been disclosed that prove Remington’s partnership with Activision continued. In the game, however, the ACR 6.8 and MSR have the word “Remington” engraved on the side of the gun model; as for the RSASS, a quick check reveals that its full name is “Remington Semi-Automatic Sniper System”: advertising so blatant that, as the Chinese saying goes, you could not wash it off even by jumping into the Mississippi.</p><p><img src="https://s2.loli.net/2023/10/30/DPmBVOwjhJlg6Ax.png" alt="image.png"></p><p><img src="https://s2.loli.net/2023/10/30/zw62puDdEG97XxL.png" alt="image.png"></p><p>Call of Duty Online, the now-defunct China-exclusive title licensed and published by Tencent, was built on the Modern Warfare source code and also incorporated the ACR. In the early days of its operation, before overpowered premium guns arrived, the ACR was considered a good assault rifle.</p><p>It was 2012, and Remington had yet to see significant growth in its financial results, but executives were beginning to believe that video games were attracting new gun buyers. 
That year, Trull wrote in an email, “It is truly ironic that ten years ago, video games were considered the number one threat to attracting new gun owners; now they are an attraction to everyone.”</p><p>Before long, however, something far more tragic happened, one that would make the executives forever regret the decision to place ads in games.</p><h2 id="The-December-2012-shooting-at-Sandy-Hook-Elementary-School-in-Connecticut"><a href="#The-December-2012-shooting-at-Sandy-Hook-Elementary-School-in-Connecticut" class="headerlink" title="The December 2012 shooting at Sandy Hook Elementary School in Connecticut"></a>The December 2012 shooting at Sandy Hook Elementary School in Connecticut</h2><p>The December 2012 shooting at Sandy Hook Elementary School in Connecticut was the second-deadliest school shooting in U.S. history, and the fourth-deadliest mass shooting.</p><p>The killer first shot his mother in her home, then drove to the school and used a Remington-made rifle to kill 26 people, including 20 children and six educators, in five minutes. When police arrived, the killer used a handgun to kill himself.</p><p><img src="https://s2.loli.net/2023/10/30/jCz1xf5EyhpUHL8.png" alt="image.png"><br><strong>The rifle found at the scene</strong></p><p>The shooting once again pushed the issue of guns into the center of American public debate, and the video game industry was caught in the crossfire.</p><p>Wayne LaPierre, then executive vice president of the National Rifle Association (NRA), gave a speech on the shooting accusing gaming companies of being the seedbed of the school-shooting nightmare, calling the business a “callous, corrupt and corrupting shadow industry” that “sells and sows violence against its own people” through games like Bulletstorm, Grand Theft Auto, and Mortal Kombat.</p><p>The shooter, named Adam Lanza, was 20 years old at the time of the crime, suffered from severe mental illness, and spent most of his time holed up in his room playing video games.</p><p><img src="https://s2.loli.net/2023/10/30/aAVL1vUQtF7On5w.png" alt="image.png"><br><strong>A photo taken at the killer’s home</strong></p><p>The 48-page final report on the shooting, released by the Connecticut State’s Attorney in November 2013, did not link video games to the motive for the shooting; it merely stated a few facts.</p><p>According to the report, the killer’s library contained “violent” games such as Battlefield, Call of Duty, Grand Theft Auto, and Left 4 Dead, as well as a collection of “nonviolent” games. The killer spent most of his time playing the “non-violent” ones, with Super Mario Bros. being his favorite. He also frequented a movie theater to play the Dance Dance Revolution arcade machine, moving his feet rhythmically in response to on-screen prompts.</p><p><img src="https://s2.loli.net/2023/10/30/y5X8ORHFpNGZYv4.png" alt="image.png"></p><p>Some of the killer’s gaming inventory</p><p>But the NRA was intent on shifting the conflict and changing the social agenda, and Remington bore the brunt of it.</p><p>The families of nine victims and one surviving teacher launched a lawsuit against Remington. In their suit, they argued that Remington’s marketing through video games “appealed to the insecure and lonely and made them become like the shooter, bent on mass murder.” Ten years later, in 2022, they announced a $73 million settlement with Remington.</p><p>Remington was already suffering from internal mismanagement and heavy debt, and the sudden lawsuit worsened its financial position. 
This was partly due to the high legal costs and partly because negative public perception of Remington led investors to announce divestments. Eventually, Remington filed for bankruptcy in 2018 and again in 2020, and its assets were divided and sold to multiple buyers.</p><p>So what happened to the ACR, which executives had such high hopes for? Sadly, in-game popularity never translated into real-world sales, let alone saved Remington’s finances.</p><p>At launch in 2010, the ACR carried a suggested retail price roughly twice the figure originally announced, prompting consumer protests. It didn’t take long for the manufacturer to discover a design flaw in the ACR that “caused multiple rounds to continue to fire when the trigger was pulled” and had to recall the product.</p><p>Trull later said in a Wall Street Journal interview that the ACR “was discontinued after years of consistently low sales.” “The fact that this rifle is so popular in Call of Duty is shocking …… It’s basically the only positive thing people have said about the ACR.”</p><p>Even with Remington’s bankruptcy and the discontinuation of the ACR, the ACR in Call of Duty is still alive and well. Both the ACR and the ACR 6.8 assault rifles made a comeback in the recent beta test of Call of Duty: Modern Warfare III (COD20, 2023), still known for their low recoil, just under a new name: the MCW.</p><p><img src="https://s2.loli.net/2023/10/30/kUMv4cD57WmFIil.png" alt="image.png"></p><h2 id="American-Bar-Association’s-Legal-Guide-to-Video-Game-Development"><a href="#American-Bar-Association’s-Legal-Guide-to-Video-Game-Development" class="headerlink" title="American Bar Association’s Legal Guide to Video Game Development"></a>American Bar Association’s Legal Guide to Video Game Development</h2><p>In the lawsuit against Remington, lawyers for the victims obtained internal Remington documents that completely solidified the existence of this symbiotic relationship.</p><p>Remington’s strategy of marketing firearms to a younger audience sounds unrealistic. However, the United States has its own national conditions: Americans do develop brand preferences for firearms, just as the rest of us develop brand preferences for electronics or game manufacturers. And it is not unlikely that this preference is cultivated from a young age.</p><p>In the wake of the Sandy Hook tragedy, the media outlet Eurogamer found a very specific member of the gun ads’ audience: a 13-year-old American boy named Smith.</p><p>He loved guns, owning nearly a dozen BB guns, and had fired a real M1911 pistol with his grandfather in the countryside. He also played Call of Duty, and his favorite gun in the game was Remington’s MSR sniper rifle. “It’s a really nice, accurate sniper rifle that rarely misses. I think once I get older, I’d like to have a real one.”</p><p>We don’t really know whether Smith, now an adult, ever bought a Remington gun. But for now, it seems that the symbiotic relationship between guns and games has, in some form, come to an end. 
Fewer and fewer American kids like Smith are likely to fall in love with a particular gun brand because of a game.</p><p>After 2013, a whole host of games that treat the U.S. as a major market and feature shooter themes or firearms elements, including Call of Duty and Battlefield, became careful to rarely use realistic weapon names or faithfully modeled guns. Even the Kalashnikov assault rifle, whose copyright is difficult to trace and which is commonly called the “AK47”, is no longer referred to as the “AK” in Call of Duty: Modern Warfare II (COD 19, 2022), but rather as the “Kastov”.</p><p><img src="https://s2.loli.net/2023/10/30/sWt65x8RZ7vwL1U.png" alt="image.png"></p><p>In addition to the two usual explanations of laziness and high royalties, the desperation to sever any link with gun violence is a major reason for the excessive caution of American game makers.</p><p>A 2019 story in The Atlantic quoted Ross Dannenberg, author of the American Bar Association’s Legal Guide to Video Game Development, as saying, “Some game companies have a policy of ‘We won’t apply for a firearms license’ …… The only reason they make such statements about firearms is that they don’t want to be found guilty in the court of public opinion for supporting the gun industry.”</p><p>Meanwhile, manufacturers outside the United States are virtually unaffected. The likes of France’s Ubisoft have never shied away from introducing large numbers of realistic firearms into their shooters. 
Russia’s Battlestate Games (BSG) holds licenses from major arms manufacturers (including the Kalashnikov Group) and even accessory makers for Escape from Tarkov, and it is those royalties that have pushed the game’s price tag up.</p><p><img src="https://s2.loli.net/2023/10/30/56APedik1CZzXoT.png" alt="image.png"></p><p><strong>Ubisoft’s FPS XDefiant in open beta</strong></p><p><strong>There’s an ACR 6.8 in the arsenal.</strong></p><p>When gamers lament and criticize a shooter’s refusal to faithfully recreate realistic firearms, they also need to understand the realities of the moment. Gun violence, like geopolitics and diversity, is a sensitive issue, and game makers’ concerns about these issues are gradually reshaping every game we play.</p><p>But I’m afraid real issues can only be properly addressed in reality. Suing Remington and purging gun manufacturers’ advertisements from games is not enough to calm the grief that school shootings have brought to Americans, nor is it enough to prevent similar tragedies from happening again.</p>]]></content>
    
    
    <summary type="html">At E3 2009, Call of Duty:Modern Warfare 2 (COD6) served up an unmistakably classic episodic campaign demo that is still talked about by FPS veterans</summary>
    
    
    
    <category term="Game News" scheme="https://www.nablepart.com/categories/Game-News/"/>
    
    <category term="Gaming Strategy" scheme="https://www.nablepart.com/categories/Gaming-Strategy/"/>
    
    
    <category term="Modern Warfare 2" scheme="https://www.nablepart.com/tags/Modern-Warfare-2/"/>
    
    <category term="Cliffhanger" scheme="https://www.nablepart.com/tags/Cliffhanger/"/>
    
    <category term="COD6" scheme="https://www.nablepart.com/tags/COD6/"/>
    
    <category term="GameFAQs" scheme="https://www.nablepart.com/tags/GameFAQs/"/>
    
    <category term="ACR" scheme="https://www.nablepart.com/tags/ACR/"/>
    
    <category term="Battlefield" scheme="https://www.nablepart.com/tags/Battlefield/"/>
    
    <category term="Call of Duty" scheme="https://www.nablepart.com/tags/Call-of-Duty/"/>
    
    <category term="Grand Theft Auto" scheme="https://www.nablepart.com/tags/Grand-Theft-Auto/"/>
    
    <category term="Survival Road" scheme="https://www.nablepart.com/tags/Survival-Road/"/>
    
  </entry>
  
  <entry>
    <title>React Native Split bundle and Loading</title>
    <link href="https://www.nablepart.com/beb98ab315d2/"/>
    <id>https://www.nablepart.com/beb98ab315d2/</id>
    <published>2023-10-28T06:31:00.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
    <content type="html"><![CDATA[<h2 id="Understanding-what-js-looks-like-in-memory"><a href="#Understanding-what-js-looks-like-in-memory" class="headerlink" title="Understanding what js looks like in memory"></a>Understanding what js looks like in memory</h2><p>Now we have a base package and multiple business packages, but how do we go about loading these RN packages? First we need to see what the js looks like once it has been loaded. We can inspect the JSContext through the Develop menu in Safari or Safari Technology Preview.</p><image src="../assets/rn-6.png"/><p>We found that once loading completes, the specified file has simply been read into memory. So what do we need to do to read this file into memory ourselves?</p><p>Option 1: the native side reads the js file, passes it to the js side through a notification, and the js side executes the code with eval.</p><p>Option 2: look at how RCTBridge loads js code on the native side, and load ours the same way.</p><h2 id="Load-the-unpacked-code-into-memory"><a href="#Load-the-unpacked-code-into-memory" class="headerlink" title="Load the unpacked code into memory"></a>Load the unpacked code into memory</h2><p>There is no real choice to make; option 2 is clearly more reliable. So we found these two methods.</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">- (void)executeSourceCode:(NSData *)sourceCode sync:(BOOL)sync;</span><br><span class="line">- (void)executeSourceCode:(NSData *)sourceCode bundleUrl:(NSURL *)bundleUrl sync:(BOOL)sync;</span><br></pre></td></tr></table></figure><p>What’s the difference? As you can see from the parameters, one doesn’t let you set the jsbundle path and the other provides a parameter to set it. Let’s use the method with the extra parameter.</p><p>So we take the js packages we just produced and load them through these two methods. 
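</p>

<p>For contrast, option 1 on the js side would amount to something like the sketch below. This is a hypothetical illustration only: loadBundleViaEval and the notification plumbing are made-up names, not a real React Native API.</p>

```javascript
// Hypothetical sketch of option 1 (the approach we did NOT choose):
// the native side reads the bundle file and hands its source to js,
// which executes it with eval. The names here are illustrative.
function loadBundleViaEval(sourceCode) {
  // Indirect eval runs the code in the global scope, where the base
  // package's module registry (__d / __r) lives, rather than in this
  // function's local scope.
  (0, eval)(sourceCode);
}

// The source would arrive from native via a notification; here we just
// show that evaluated code lands in the global environment.
loadBundleViaEval("globalThis.__businessBundleLoaded = true;");
```

<p>This is fragile: the evaluated bundle can only resolve modules if the base package has already set up the registry, which is one more reason that reusing RCTBridge’s own loading path (option 2) is the safer choice.</p>

<p>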
We get the following loading result.</p><image src="../assets/rn-7.png"/><p>We successfully loaded two RN packages into memory.</p><h2 id="Problems-encountered"><a href="#Problems-encountered" class="headerlink" title="Problems encountered"></a>Problems encountered</h2><p>Could it really have gone that smoothly? Of course not. </p><p>There are actually other considerations here:</p><h3 id="The-base-package-has-to-be-loaded-first"><a href="#The-base-package-has-to-be-loaded-first" class="headerlink" title="The base package has to be loaded first"></a>The base package has to be loaded first</h3><p>The base package is the cornerstone of the business packages: you must ensure the base package has loaded before loading a business package, otherwise there will be many exceptions. How do we ensure the base package is loaded? </p><ol><li><p>Use the js code loading notification mentioned above: RCTJavaScriptDidLoadNotification. </p></li><li><p>Check whether the isLoading property of the RCTBridge has changed to false.</p></li></ol><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">while (gRCTBridge.isLoading) &#123;// spin until the base package finishes loading, blocking subsequent logic</span><br><span class="line">     &#125;</span><br></pre></td></tr></table></figure><h3 id="Execute-code-in-js-thread"><a href="#Execute-code-in-js-thread" class="headerlink" title="Execute code in js thread"></a>Execute code in js thread</h3><p>Code execution must happen on the js thread, otherwise there will be some inexplicable crashes.</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">[gRCTBridge.batchedBridge dispatchBlock:^&#123;</span><br><span class="line">      [gRCTBridge.batchedBridge executeSourceCode:source.data bundleUrl:bundUrl sync:YES];</span><br><span class="line"> &#125; queue:RCTJSThread];</span><br></pre></td></tr></table></figure><h3 id="After-loading-the-js-code-first-then-create-the-RCTRootView"><a href="#After-loading-the-js-code-first-then-create-the-RCTRootView" class="headerlink" title="After loading the js code first, then create the RCTRootView"></a>After loading the js code first, then create the RCTRootView</h3><p>Make sure the js code is loaded before creating the RCTRootView, otherwise it will throw a require unknown module exception.</p><h3 id="Benefits-of-Split-bundle"><a href="#Benefits-of-Split-bundle" class="headerlink" title="Benefits of Split bundle"></a>Benefits of Split bundle</h3><p>Having done all this, what exactly are the advantages of bundle splitting that make it worthwhile?</p><ul><li><p>Split business logic, reducing coupling between businesses</p></li><li><p>Load businesses as separate instances, so that a crash in part of the business does not make the whole app unavailable</p></li><li><p>Reduce the resource waste caused by hot updates: by splitting the RN bundle into a base package, public packages and business packages, the infrequently updated base and public packages are excluded from the business package, which reduces the size of the business package and greatly reduces bandwidth consumption during updates.</p></li></ul>]]></content>
    
    
    <summary type="html">React Native Unpacking and Loading</summary>
    
    
    
    <category term="Front end" scheme="https://www.nablepart.com/categories/Front-end/"/>
    
    <category term="RN" scheme="https://www.nablepart.com/categories/Front-end/RN/"/>
    
    
    <category term="RN" scheme="https://www.nablepart.com/tags/RN/"/>
    
    <category term="React Native" scheme="https://www.nablepart.com/tags/React-Native/"/>
    
    <category term="React" scheme="https://www.nablepart.com/tags/React/"/>
    
    <category term="JavascriptCore" scheme="https://www.nablepart.com/tags/JavascriptCore/"/>
    
    <category term="JSI" scheme="https://www.nablepart.com/tags/JSI/"/>
    
    <category term="Split" scheme="https://www.nablepart.com/tags/Split/"/>
    
    <category term="loading" scheme="https://www.nablepart.com/tags/loading/"/>
    
  </entry>
  
  <entry>
    <title>React Native Packaging and split</title>
    <link href="https://www.nablepart.com/d1c8943b326c/"/>
    <id>https://www.nablepart.com/d1c8943b326c/</id>
    <published>2023-10-28T05:51:10.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
    <content type="html"><![CDATA[<h2 id="1-Packaging"><a href="#1-Packaging" class="headerlink" title="1. Packaging"></a>1. Packaging</h2><p>react-native-cli provides the bundle command, which we can combine with metro.config.js to configure bundling parameters.</p><p>After installing react-native-cli, run react-native bundle -h in the root directory of the RN project to get the following help message.</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br></pre></td><td class="code"><pre><span class="line">Options:</span><br><span class="line">  --entry-file &lt;path&gt;                Path to the root JS file, either absolute or relative to JS root</span><br><span class="line">  --platform [string]                Either &quot;ios&quot; or &quot;android&quot; (default: &quot;ios&quot;)</span><br><span class="line">  --transformer [string]             Specify a custom transformer to be used</span><br><span class="line">  --dev [boolean]                    If false, warnings are disabled and the bundle is minified (default: true)</span><br><span class="line">  --minify [boolean]                 Allows overriding whether bundle is minified. This defaults to false if dev is true, and true if dev is false. Disabling minification can be useful for speeding up production builds for testing purposes.</span><br><span class="line">  --bundle-output &lt;string&gt;           File name where to store the resulting bundle, ex. 
/tmp/groups.bundle</span><br><span class="line">  --bundle-encoding [string]         Encoding the bundle should be written in (https://nodejs.org/api/buffer.html#buffer_buffer). (default: &quot;utf8&quot;)</span><br><span class="line">  --max-workers [number]             Specifies the maximum number of workers the worker-pool will spawn for transforming files. This defaults to the number of the cores available on your machine.</span><br><span class="line">  --sourcemap-output [string]        File name where to store the sourcemap file for resulting bundle, ex. /tmp/groups.map</span><br><span class="line">  --sourcemap-sources-root [string]  Path to make sourcemap&#x27;s sources entries relative to, ex. /root/dir</span><br><span class="line">  --sourcemap-use-absolute-path      Report SourceMapURL using its full path</span><br><span class="line">  --assets-dest [string]             Directory name where to store assets referenced in the bundle</span><br><span class="line">  --reset-cache                      Removes cached files</span><br><span class="line">  --read-global-cache                Try to fetch transformed JS code from the global cache, if configured.</span><br><span class="line">  --config [string]                  Path to the CLI configuration file</span><br></pre></td></tr></table></figure><p>A few parameters we need to use when packaging are:</p><p>--entry-file: the entry file, usually index.js in the project root.</p><p>--platform: the target platform, ios or android.</p><p>--dev: whether to build in debug mode; the default is debug mode, so set it to false for release builds.</p><p>--minify: whether the code should be minified; when dev is true, minify defaults to false, and when dev is false it defaults to true.</p><p>--bundle-output: path where the packaged jsbundle is stored.</p><p>--assets-dest: path where packaged assets are stored; it determines the relative paths of resource files recorded in the jsbundle, so try to set it to the same directory as the jsbundle.</p><p>--sourcemap-output: path for the sourcemap output; if omitted, no sourcemap is produced.</p><p>--config: the metro config file used to configure bundling, where we can set up bundle splitting.</p><h2 id="2-Split-bundle"><a href="#2-Split-bundle" class="headerlink" title="2. Split bundle"></a>2. Split bundle</h2><p>By default, we can produce a complete RN bundle directly with the following command.</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">react-native bundle --entry-file ./index.js --platform ios --dev false --bundle-output ./build/ios/main.jsbundle --assets-dest ./build/ios/assets</span><br></pre></td></tr></table></figure><p>However, in a large project many businesses are divided into business groups by product line or other scenarios, and bundling the whole application into one complete package is not appropriate. We can do what Tuya Smart does: each panel creates its own RCTRootView maintained by a separate RCTBridge. The advantage is that no splitting is needed and each panel’s business environment is independent, with no variable pollution. The disadvantage is that each business package is very large, downloads consume resources, and each panel creates its own independent js runtime, so it is not suitable for running many panels at the same time.</p><p>We can also load split bundles, with each panel’s business split into three parts.</p><ol><li><p>RN framework part: the core RN framework code, which must be loaded before the panel starts to ensure the subsequent loading logic works. This part rarely changes and can be iterated along with app releases.</p></li><li><p>Public library part: shared third-party dependencies such as tuya-panel-kit. 
This part can be released as a separate hot-update version to fix bugs in the public libraries.</p></li><li><p>Business part: the business code itself, which changes with business requirements; some non-public third-party dependencies may be added here.</p></li></ol><p>For simplicity, we combine 1 and 2 into one module, which we call the base package; 3, the business part, is called the business package.</p><p>First we generate a configuration file for the base package, specifying which files need to be packaged into the base package and how each module’s id is defined during packaging.</p><figure class="highlight js"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br></pre></td><td 
class="code"><pre><span class="line"><span class="keyword">const</span> pathSep = <span class="built_in">require</span>(<span class="string">&#x27;path&#x27;</span>).<span class="property">sep</span>;</span><br><span class="line"><span class="keyword">function</span> <span class="title function_">createModuleIdFactory</span>(<span class="params"></span>) &#123; <span class="comment">// defines how each module id is generated</span></span><br><span class="line">  <span class="keyword">const</span> projectRootPath = __dirname;<span class="comment">// the directory the command runs in; __dirname is provided by Node.js</span></span><br><span class="line">  <span class="keyword">return</span> <span class="function"><span class="params">path</span> =&gt;</span> &#123;</span><br><span class="line">    <span class="keyword">let</span> name = <span class="string">&#x27;&#x27;</span>;</span><br><span class="line">    <span class="keyword">if</span>(path.<span class="title function_">indexOf</span>(<span class="string">&#x27;node_modules&#x27;</span>+pathSep+<span class="string">&#x27;react-native&#x27;</span>+pathSep+<span class="string">&#x27;Libraries&#x27;</span>+pathSep)&gt;<span class="number">0</span>)&#123;</span><br><span class="line">      name = path.<span class="title function_">substr</span>(path.<span class="title function_">lastIndexOf</span>(pathSep)+<span class="number">1</span>);<span class="comment">// strip everything up to and including &#x27;node_modules/react-native/Libraries/&#x27; to shrink the bundle; optional</span></span><br><span class="line">    &#125;<span class="keyword">else</span> <span class="keyword">if</span>(path.<span class="title function_">indexOf</span>(projectRootPath)==<span class="number">0</span>)&#123;</span><br><span class="line">      name = path.<span class="title function_">substr</span>(projectRootPath.<span class="property">length</span>+<span class="number">1</span>);<span class="comment">// use the path relative to the project root; otherwise the id becomes a long path like _user_smallnew_works_.... that even embeds the computer name</span></span><br><span class="line">    &#125;</span><br><span class="line">    <span class="keyword">return</span> name; <span class="comment">// here we use the file path relative to the project as each module&#x27;s moduleId</span></span><br><span class="line">  &#125;;</span><br><span class="line">&#125;</span><br><span class="line"><span class="keyword">function</span> <span class="title function_">processModuleFilter</span>(<span class="params"><span class="variable language_">module</span></span>)&#123;<span class="comment">// decides whether this module should be bundled</span></span><br><span class="line">  <span class="keyword">if</span> (</span><br><span class="line">         <span class="variable language_">module</span>.<span class="property">path</span>.<span class="title function_">indexOf</span>(<span class="string">&#x27;node_modules&#x27;</span>) &gt; -<span class="number">1</span> <span class="comment">// files referenced under node_modules go into the base package</span></span><br><span class="line">  )&#123;</span><br><span class="line">    <span class="keyword">return</span> <span class="literal">true</span>;</span><br><span class="line">  &#125;</span><br><span class="line">  <span class="keyword">return</span> <span class="literal">false</span>;</span><br><span class="line">&#125;</span><br><span class="line"><span class="keyword">function</span> <span class="title function_">getRunModuleStatement</span>(<span class="params">entryFilePath</span>)&#123;</span><br><span class="line">  <span class="keyword">return</span> <span class="string">`__r(&#x27;indexCore$js&#x27;)`</span><span class="comment">// the entry module that require should execute at the end</span></span><br><span class="line">&#125;</span><br><span class="line"><span class="variable language_">module</span>.<span class="property">exports</span> = &#123;</span><br><span class="line">  <span class="attr">transformer</span>: &#123;</span><br><span class="line">    <span class="attr">getTransformOptions</span>: <span class="title function_">async</span> () =&gt; (&#123;</span><br><span class="line">      <span class="attr">transform</span>: &#123;</span><br><span class="line">        <span 
class="attr">experimentalImportSupport</span>: <span class="literal">false</span>,</span><br><span class="line">        <span class="attr">inlineRequires</span>: <span class="literal">false</span>,</span><br><span class="line">      &#125;,</span><br><span class="line">    &#125;),</span><br><span class="line">  &#125;,</span><br><span class="line">  <span class="attr">serializer</span>: &#123;</span><br><span class="line">    <span class="attr">createModuleIdFactory</span>:createModuleIdFactory,</span><br><span class="line">    <span class="attr">processModuleFilter</span>:processModuleFilter,</span><br><span class="line">    <span class="attr">getRunModuleStatement</span>:getRunModuleStatement</span><br><span class="line">  &#125;</span><br><span class="line">&#125;;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>Once the configuration is complete, we use it for packaging.</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">react-native bundle --config ./core.config.js  --entry-file ./indexCore.js --platform $&#123;PLATFORM&#125; --dev false --bundle-output $&#123;OUTPUT_PATH&#125;/$&#123;BIZ_FOLDER&#125;.$&#123;PLATFORM&#125;.js  --assets-dest $&#123;OUTPUT_PATH&#125;</span><br></pre></td></tr></table></figure><p>After producing the RN bundle, let’s compare its contents with a directly bundled RN package.</p><figure class="highlight js"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line"><span class="title function_">__d</span>(<span class="keyword">function</span>(<span class="params">g,r,i,a,m,e,d</span>)&#123;<span class="string">&#x27;use strict&#x27;</span>;m.<span class="property">exports</span>=<span class="title function_">r</span>(d[<span class="number">0</span>])&#125;,<span class="number">3</span>,[<span class="number">4</span>]);</span><br><span class="line"><span class="title function_">__d</span>(<span class="keyword">function</span>(<span class="params">g,r,i,a,m,e,d</span>)&#123;<span class="string">&#x27;use strict&#x27;</span>;m.<span class="property">exports</span>=<span class="title function_">r</span>(d[<span class="number">0</span>])&#125;,<span class="string">&quot;node_modules/react/index.js&quot;</span>,[<span class="string">&quot;node_modules/react/cjs/react.production.min.js&quot;</span>]);</span><br></pre></td></tr></table></figure><p>Looking at the packaged code, the moduleIds that were originally defined as numbers are now defined as file paths relative to the project - clearer and much easier to understand.</p><p>Similarly, we can modify the processModuleFilter rules to produce multiple business packages. The packaging and splitting part of the implementation is now complete.</p>]]></content>
    
    
    <summary type="html">React Native Knowledge Points Explained</summary>
    
    
    
    <category term="Front end" scheme="https://www.nablepart.com/categories/Front-end/"/>
    
    <category term="RN" scheme="https://www.nablepart.com/categories/Front-end/RN/"/>
    
    
    <category term="RN" scheme="https://www.nablepart.com/tags/RN/"/>
    
    <category term="React Native" scheme="https://www.nablepart.com/tags/React-Native/"/>
    
    <category term="React" scheme="https://www.nablepart.com/tags/React/"/>
    
    <category term="JavascriptCore" scheme="https://www.nablepart.com/tags/JavascriptCore/"/>
    
    <category term="JSI" scheme="https://www.nablepart.com/tags/JSI/"/>
    
  </entry>
  
  <entry>
    <title>React Native Hot Updates and Incremental Updates</title>
    <link href="https://www.nablepart.com/56fba6ad76ce/"/>
    <id>https://www.nablepart.com/56fba6ad76ce/</id>
    <published>2023-10-28T05:11:00.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
    <content type="html"><![CDATA[<h2 id="Hot-update"><a href="#Hot-update" class="headerlink" title="Hot update"></a>Hot update</h2><h3 id="What-is-hot-update"><a href="#What-is-hot-update" class="headerlink" title="What is hot update"></a>What is hot update</h3><p>With the splitting and loading logic in place, a hot update is really just downloading a new business package and replacing the old one.</p><h3 id="How-to-ensure-that-a-hot-updated-version-is-available"><a href="#How-to-ensure-that-a-hot-updated-version-is-available" class="headerlink" title="How to ensure that a hot updated version is available"></a>How to ensure that a hot updated version is available</h3><p>But what if a hot-updated business package keeps crashing?</p><p>Option 1: Release a bugfix version to fix the hot update. Prerequisite: R&amp;D notices the online crash.</p><p>Option 2: Set up online rollback to the previous version.</p><p>Option 3: Set up a local trial-run mechanism with three version directories: history, trial run and official. A hot-update version is first placed in the trial-run directory; after its first successful run it is moved to the official directory and runs from there in the following days. If the trial run crashes repeatedly up to a maximum count (e.g. 2 times), roll back to the previous version (the last runnable version in the history directory), record the failure of this hot update, and retry the hot update later; if the retry also crashes up to the maximum count, give up on this hot-update version until the next one is released.</p><h3 id="What’s-the-time-to-update"><a href="#What’s-the-time-to-update" class="headerlink" title="What’s the time to update"></a>What’s the time to update</h3><p>There are many options for when to hot update, such as updating all business packages when starting the app, polling regularly to update all business packages, pushing updates, checking and updating the current business package when starting the business, and so on.</p><h4 id="Update-all-business-packs-when-launching-the-app"><a href="#Update-all-business-packs-when-launching-the-app" class="headerlink" title="Update all business packs when launching the app"></a>Update all business packs when launching the app</h4><p>The business logic here is the simplest: after the app starts, first call the hot-update API to check whether any business package installed in the app has a new version. If there is one, download it and hold it ready; if the business has not yet been launched, start it with the new package, and if it has, the new package takes effect the next time the business starts.</p><p>Advantages: updating right after startup keeps the logic simple and ensures that in most scenarios users enter the business with the latest package.</p><p>Disadvantages: it adds to app startup time, and when entering the business the hot update may not have finished, so the old business package is still used. 
It is also unfriendly to users who rarely kill the process to exit the application.</p><h4 id="Timed-polling-to-update-all-business-packs"><a href="#Timed-polling-to-update-all-business-packs" class="headerlink" title="Timed polling to update all business packs"></a>Timed polling to update all business packs</h4><p>For applications that release hot updates frequently, polling for and downloading hot-update packages at regular intervals is a common way to keep the business version up to date.</p><p>Advantages: maximizes the reach of hot updates.</p><p>Disadvantages: the load on the hot-update interface is heavy, and peak request availability must be guaranteed by technical means; timed requests also produce many useless requests, wasting resources.</p><h4 id="Push-to-update-business-packs"><a href="#Push-to-update-business-packs" class="headerlink" title="Push to update business packs"></a>Push to update business packs</h4><p>For most scenarios, push is the better approach: updates are triggered only when a new version is available, with no useless requests and real-time delivery.</p><p>Advantages: timely version updates, with none of the wasted resources of polling.</p><p>Disadvantages: push compatibility on Android is poor, and there is no guarantee that users have push enabled for the app.</p><h4 id="Update-the-current-service-package-when-starting-a-service"><a href="#Update-the-current-service-package-when-starting-a-service" class="headerlink" title="Update the current service package when starting a service"></a>Update the current service package when starting a service</h4><p>On-demand update: detect and download the new version only when the user starts the service, and enter the service with the new version once the update completes.</p><p>Advantages: ensures users are always on the latest version.</p><p>Disadvantages: every service launch requires a version check, which wastes resources; and when a new version exists, the user has to wait on a download page and can only enter the service after the download finishes, so the user experience is poor.</p><h2 id="incremental-update"><a href="#incremental-update" class="headerlink" title="incremental update"></a>incremental update</h2><p>Hot updates as described are good enough for our needs, but they still waste resources: every business update downloads a full-size business package even when the change may be a single line of code. So how do we reduce costs further?</p><p>Developers who have done Android development will know that Android has a hot-fix mechanism, the patch package: the contents of the current revision are compared with the previous one, the differing parts are split out to generate a patch package, and when an old version of the application downloads the patch package and merges it into the current app, you get the new version of the app. Most app stores have adopted this technique.</p><p>git also has a diff command that shows the differences between two files, and incremental updating is similar to this git diff technique: iterate through all the files in the business package we have built, find the files that differ, generate per-file patches, and compress them into a patch-package collection for the corresponding version. 
When the app downloads the patch, it merges the patch with the corresponding version of the business package, and once the merge is complete you have the new version of the business package.</p><p>How big is the difference between an incremental update and a full update?</p><p>The business package content before modification:</p><img src="../assets/rn-8.png" /><p>The modified business package content:</p><img src="../assets/rn-9.png" /><p>The modified full package size:</p><img src="../assets/rn-10.png" /><p>The modified incremental package size:</p><img src="../assets/rn-11.png" /><p>And how big is a full package if you don’t do unpacking at all?</p><img src="../assets/rn-12.png" /><p>The demo project here has only a few simple pages and no third-party dependencies. Imagine the ratio if you imported third-party dependencies and added some rarely changing image resources.</p><h2 id="How-to-load-a-new-RN-package"><a href="#How-to-load-a-new-RN-package" class="headerlink" title="How to load a new RN package"></a>How to load a new RN package</h2><p>After downloading a new RN package, will loading the business directly with it take effect? Actually, it won’t.</p><p>As we all know, the dependencies we introduce in a file through import or require are eventually converted into require calls, and require is defined by Metro. 
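As a toy illustration (a simplified sketch, not Metro's actual implementation, whose factories receive more parameters), module definition and require with a cached registry look roughly like this:

```javascript
// Toy sketch of metro-style module registration (__d) and require (__r).
// Simplified assumption: the real factory receives (g, r, i, a, m, e, d).
const modules = {};

function __d(factory, moduleId, dependencyMap) {
  modules[moduleId] = {
    factory,
    dependencyMap,
    isInitialized: false,
    publicModule: { exports: {} },
  };
}

function __r(moduleId) {
  const module = modules[moduleId];
  if (module && module.isInitialized) {
    // Cache hit: the factory never runs again, so a hot-updated
    // definition of this module would be ignored.
    return module.publicModule.exports;
  }
  module.isInitialized = true;
  module.factory(__r, module.publicModule, module.dependencyMap);
  return module.publicModule.exports;
}
```

Because __r returns the cached exports on every call after the first, a module that has already been initialized keeps its old logic even if its definition is replaced.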
For example, after we modify and save a code file in an RN framework that doesn’t support hot reload, we have to reload the whole project for the change to take effect; this is because the require method has a caching policy.</p><p>If we open the require.js file, we find the following logic.</p><figure class="highlight js"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">const</span> <span class="variable language_">module</span> = modules[moduleIdReallyIsNumber];</span><br><span class="line"><span class="keyword">return</span> <span class="variable language_">module</span> &amp;&amp; <span class="variable language_">module</span>.<span class="property">isInitialized</span></span><br><span class="line">  ? <span class="variable language_">module</span>.<span class="property">publicModule</span>.<span class="property">exports</span></span><br><span class="line">  : <span class="title function_">guardedLoadModule</span>(moduleIdReallyIsNumber, <span class="variable language_">module</span>);</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>We can see from this logic that require reads and loads the corresponding code only if the module with that moduleId has not yet been initialized; if it has, require returns the cached data. Therefore, if we hot-update a business package that has already been started once, the logic in the new business package will never be used. So what do we do?</p><p>From the code above, we can see that module code is cached in the modules variable. So when we exit the business, can we simply clean up the relevant modules? And when exactly should we clean them up?</p><p>Here we turn to an API that is at once familiar and unfamiliar: AppRegistry. 
Opening the AppRegistry documentation, we find an application-unmount API, unmountApplicationComponentAtRootTag, which is called when the application exits and is unmounted. But which modules should be cleaned up? We need to record that ourselves; we cannot simply empty all the modules. Looking at the runApplication method, which is the JS method RCTRootView calls on start and is the entry point of the application, we can record the application’s modules and the names of the modules it depends on from the entry parameter.</p><figure class="highlight js"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">const</span> initRunApplication = <span class="title class_">AppRegistry</span>.<span class="property">runApplication</span>;</span><br><span class="line"><span class="keyword">const</span> initUnmountApplicationComponentAtRootTag = <span class="title class_">AppRegistry</span>.<span class="property">unmountApplicationComponentAtRootTag</span>;</span><br><span class="line"><span class="title class_">AppRegistry</span>.<span class="property">runApplication</span> = <span class="function">(<span class="params">appKey, appParameters</span>) =&gt;</span> &#123;</span><br><span class="line">  <span class="keyword">const</span> &#123; rootTag, <span class="attr">initialProps</span>: &#123; bundleName, bundleUrl, entry 
&#125; &#125; = appParameters;</span><br><span class="line">  definedBizModules[rootTag] = &#123; entry, <span class="attr">prefix</span>: <span class="string">`src/pages/<span class="subst">$&#123;bundleName&#125;</span>/`</span> &#125;;</span><br><span class="line">  <span class="keyword">new</span> <span class="title class_">SourceTransformer</span>(bundleUrl, bundleName)</span><br><span class="line">  <span class="title function_">initRunApplication</span>(appKey, appParameters);</span><br><span class="line">&#125;</span><br><span class="line"><span class="title class_">AppRegistry</span>.<span class="property">unmountApplicationComponentAtRootTag</span> = <span class="function">(<span class="params">rootTag</span>) =&gt;</span> &#123;</span><br><span class="line">  <span class="title function_">initUnmountApplicationComponentAtRootTag</span>(rootTag);</span><br><span class="line">  <span class="keyword">const</span> &#123; entry, prefix &#125; = definedBizModules[rootTag];</span><br><span class="line">  <span class="variable language_">console</span>.<span class="title function_">log</span>(<span class="string">&#x27;unmount&#x27;</span>, entry)</span><br><span class="line">  <span class="variable language_">global</span>.<span class="title function_">__destroyModules</span>(entry, prefix)</span><br><span class="line">&#125;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>In order to keep the application running properly, we need to keep the original method and execute it as soon as our hook code finishes executing.</p><p>How global.__destroyModules is defined, this is where we need to modify some code in the require.</p><figure class="highlight js"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span 
class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">const</span> destroyModules = <span class="keyword">function</span> (<span class="params">dModule, prefix</span>) &#123;</span><br><span class="line">   <span class="keyword">if</span> (<span class="keyword">typeof</span> dModule === <span class="string">&#x27;string&#x27;</span>) &#123;</span><br><span class="line">     <span class="keyword">delete</span> modules[dModule];</span><br><span class="line">     <span class="title class_">Object</span>.<span class="title function_">keys</span>(modules).<span class="title function_">forEach</span>(<span class="function">(<span class="params">item,i</span>)=&gt;</span>&#123;</span><br><span class="line">         <span class="keyword">if</span>(item.<span class="title function_">startsWith</span>(prefix))&#123;</span><br><span class="line">           <span class="keyword">delete</span> modules[item];</span><br><span class="line">         &#125;</span><br><span class="line">     &#125;)</span><br><span class="line">   &#125;</span><br><span class="line"> &#125;;</span><br><span class="line"> <span class="variable language_">global</span>.<span class="property">__destroyModules</span> = destroyModules;</span><br></pre></td></tr></table></figure><p>We’ll clean up all the mods under the specified path, and the next time we go back in, they’ll be loaded properly.</p>]]></content>
    
    
    <summary type="html">React Native Hot Updates and Incremental Updates</summary>
    
    
    
    <category term="Front end" scheme="https://www.nablepart.com/categories/Front-end/"/>
    
    <category term="RN" scheme="https://www.nablepart.com/categories/Front-end/RN/"/>
    
    
    <category term="RN" scheme="https://www.nablepart.com/tags/RN/"/>
    
    <category term="React Native" scheme="https://www.nablepart.com/tags/React-Native/"/>
    
    <category term="React" scheme="https://www.nablepart.com/tags/React/"/>
    
    <category term="JavascriptCore" scheme="https://www.nablepart.com/tags/JavascriptCore/"/>
    
    <category term="JSI" scheme="https://www.nablepart.com/tags/JSI/"/>
    
    <category term="Split" scheme="https://www.nablepart.com/tags/Split/"/>
    
    <category term="loading" scheme="https://www.nablepart.com/tags/loading/"/>
    
    <category term="bsdiff" scheme="https://www.nablepart.com/tags/bsdiff/"/>
    
    <category term="Differential updating" scheme="https://www.nablepart.com/tags/Differential-updating/"/>
    
  </entry>
  
  <entry>
    <title>React Native Knowledge Points Explained</title>
    <link href="https://www.nablepart.com/db091fb6abd8/"/>
    <id>https://www.nablepart.com/db091fb6abd8/</id>
    <published>2023-10-27T03:21:43.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
    <content type="html"><![CDATA[<h2 id="1-React-Native-基本概念"><a href="#1-React-Native-基本概念" class="headerlink" title="1. React Native 基本概念"></a>1. React Native Basic Concepts</h2><h3 id="1-1-JavascriptCore-JSC"><a href="#1-1-JavascriptCore-JSC" class="headerlink" title="1.1 JavascriptCore(JSC)"></a>1.1 JavascriptCore(JSC)</h3><p>For RN code to run, the first thing you need is a runtime environment for the JS code, and that is JavascriptCore. When debugging in Chrome, however, the JS code is executed by Chrome’s V8 engine, which leads to behavioural differences between debug mode and non-debug mode.</p><p>For example:</p><ol><li><p>Some date functions are not implemented on iOS; for instance, on iOS the date string 2021-08-26 cannot be converted to a Date object directly, and you need to replace “-“ with “&#x2F;“ first and then convert. You will not see this problem when debugging in Chrome. </p></li><li><p>In a debug build, Android does not report an error when an illegal whitespace character appears outside a Text tag, but it shows a red screen in a non-debug environment. You need to understand these differences to avoid such situations.</p></li></ol><p>For details on JavascriptCore, refer to the <a href="https://trac.webkit.org/wiki/JavaScriptCore">official document</a>, which describes the components of JSC and the role of each part.</p><h3 id="1-2-JSI-Javascript-Interface"><a href="#1-2-JSI-Javascript-Interface" class="headerlink" title="1.2 JSI(Javascript Interface)"></a>1.2 JSI(Javascript Interface)</h3><p>JSI is a lightweight C++ library through which JS can directly call objects and methods in the C++ layer. It is an intermediate adaptation layer that can run on a variety of JS engines; with JSI, the RN framework is no longer tied to JSC and can also use the V8 or Hermes engine. 
JSI was introduced in 2018 when Facebook refactored the RN framework; after its introduction the RN architecture changed substantially and performance improved greatly.</p><h3 id="1-3-jsbundle"><a href="#1-3-jsbundle" class="headerlink" title="1.3 jsbundle"></a>1.3 jsbundle</h3><p>Once we have the JS runtime, we need to load executable code into the app; the jsbundle is that JS code.</p><p>After business development is complete, we package the code with the bundle command provided by react-native-cli. The packaging script encodes the code we developed and generates a compressed bundle, such as a main.bundle JS package for each panel program, and the dependent resources are copied to the corresponding folders according to their paths.</p><h4 id="a-Environmental-variables-and-method-definitions"><a href="#a-Environmental-variables-and-method-definitions" class="headerlink" title="a. Environmental variables and method definitions"></a>a. 
Environmental variables and method definitions</h4><p>The first line of the jsbundle defines the runtime environment variable, which is used to indicate that the node environment you are running on is in production and to record when the script was started.</p><figure class="highlight js"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">var</span> __DEV__=<span class="literal">false</span>,__BUNDLE_START_TIME__=<span class="variable language_">this</span>.<span class="property">nativePerformanceNow</span>?<span class="title function_">nativePerformanceNow</span>():<span class="title class_">Date</span>.<span class="title function_">now</span>(),process=<span class="variable language_">this</span>.<span class="property">process</span>||&#123;&#125;;process.<span class="property">env</span>=process.<span class="property">env</span>||&#123;&#125;;process.<span class="property">env</span>.<span class="property">NODE_ENV</span>=<span class="string">&quot;production&quot;</span>;</span><br></pre></td></tr></table></figure><p>Lines 2 through 10 define global methods such as __d, __c, __r, setGlobalHandler, reportFatalError, etc., which are the basic methods for starting the RN environment.</p><h4 id="b-ReactNative-framework-and-business-code-definition"><a href="#b-ReactNative-framework-and-business-code-definition" class="headerlink" title="b. ReactNative framework and business code definition"></a>b. ReactNative framework and business code definition</h4><p>On line 11, we start to enter the React Native framework, third-party libraries, and personal code definition section, which defines the methods and variables in the code through the __d method defined on the second line.</p><p>The __d method accepts three parameters:</p><p>The first parameter indicates the definition of the module (typically the part of a file that is exported via export default). 
That is, the logic of the code in a particular code file written by us or a third-party developer.</p><p>The second parameter indicates the moduleId of the module, which needs to be referenced by this id when other modules make references to the module. The value can be a number or a string, but to ensure that each module id is unique. By default, the packaging system defines the id as an incremental number.</p><p>The third parameter represents the module’s dependencies on other modules, and is an array where each number in the array represents a dependent module.</p><figure class="highlight js"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"><span class="title function_">__d</span>(<span class="keyword">function</span>(<span class="params">g,r,i,a,m,e,d</span>)&#123;<span class="keyword">var</span> t=<span class="title function_">r</span>(d[<span class="number">0</span>]),n=<span class="title function_">t</span>(<span class="title function_">r</span>(d[<span class="number">1</span>]));<span class="title function_">t</span>(<span class="title function_">r</span>(d[<span class="number">2</span>]));<span class="title function_">r</span>(d[<span class="number">3</span>]);<span class="keyword">var</span> l=<span class="title function_">r</span>(d[<span class="number">4</span>]),o=<span class="title function_">t</span>(<span class="title function_">r</span>(d[<span class="number">5</span>])),s=<span class="title function_">r</span>(d[<span class="number">6</span>]),u=<span class="title function_">t</span>(<span class="title function_">r</span>(d[<span class="number">7</span>])),p=<span class="title function_">t</span>(<span class="title function_">r</span>(d[<span class="number">8</span>]));<span class="keyword">for</span>(<span class="keyword">var</span> f <span class="keyword">in</span> l.<span 
class="property">TextInput</span>.<span class="property">defaultProps</span>=(<span class="number">0</span>,n.<span class="property">default</span>)(&#123;&#125;,l.<span class="property">TextInput</span>.<span class="property">defaultProps</span>,&#123;<span class="attr">allowFontScaling</span>:!<span class="number">1</span>&#125;),l.<span class="property">Text</span>.<span class="property">defaultProps</span>=(<span class="number">0</span>,n.<span class="property">default</span>)(&#123;&#125;,l.<span class="property">Text</span>.<span class="property">defaultProps</span>,&#123;<span class="attr">allowFontScaling</span>:!<span class="number">1</span>&#125;),<span class="variable language_">console</span>.<span class="property">disableYellowBox</span>=!<span class="number">0</span>,l.<span class="property">UIManager</span>)l.<span class="property">UIManager</span>.<span class="title function_">hasOwnProperty</span>(f)&amp;&amp;l.<span class="property">UIManager</span>[f]&amp;&amp;l.<span class="property">UIManager</span>[f].<span class="property">directEventTypes</span>&amp;&amp;(l.<span class="property">UIManager</span>[f].<span class="property">directEventTypes</span>.<span class="property">onGestureHandlerEvent</span>=&#123;<span class="attr">registrationName</span>:<span class="string">&quot;onGestureHandlerEvent&quot;</span>&#125;,l.<span class="property">UIManager</span>[f].<span class="property">directEventTypes</span>.<span class="property">onGestureHandlerStateChange</span>=&#123;<span class="attr">registrationName</span>:<span class="string">&quot;onGestureHandlerStateChange&quot;</span>&#125;);<span class="title function_">r</span>(d[<span class="number">9</span>]),<span class="title function_">r</span>(d[<span class="number">10</span>]),<span class="title function_">r</span>(d[<span class="number">11</span>]),g.<span class="property">userStore</span>=(<span class="number">0</span>,s.<span class="property">createStore</span>)(u.<span 
class="property">default</span>,(<span class="number">0</span>,s.<span class="property">applyMiddleware</span>)(p.<span class="property">default</span>)),o.<span class="property">default</span>.<span class="title function_">hide</span>()&#125;,<span class="number">0</span>,[<span class="number">1</span>,<span class="number">2</span>,<span class="number">3</span>,<span class="number">6</span>,<span class="number">18</span>,<span class="number">416</span>,<span class="number">417</span>,<span class="number">420</span>,<span class="number">422</span>,<span class="number">423</span>,<span class="number">656</span>,<span class="number">727</span>]);</span><br><span class="line"><span class="title function_">__d</span>(<span class="keyword">function</span>(<span class="params">g,r,i,a,m,e,d</span>)&#123;m.<span class="property">exports</span>=<span class="keyword">function</span>(<span class="params">n</span>)&#123;<span class="keyword">return</span> n&amp;&amp;n.<span class="property">__esModule</span>?<span class="attr">n</span>:&#123;<span class="attr">default</span>:n&#125;&#125;&#125;,<span class="number">1</span>,[]);</span><br><span class="line"><span class="title function_">__d</span>(<span class="keyword">function</span>(<span class="params">g,r,i,a,m,e,d</span>)&#123;<span class="keyword">function</span> <span class="title function_">t</span>(<span class="params"></span>)&#123;<span class="keyword">return</span> m.<span class="property">exports</span>=t=<span class="title class_">Object</span>.<span class="property">assign</span>||<span class="keyword">function</span>(<span class="params">t</span>)&#123;<span class="keyword">for</span>(<span class="keyword">var</span> n=<span class="number">1</span>;n&lt;<span class="variable language_">arguments</span>.<span class="property">length</span>;n++)&#123;<span class="keyword">var</span> o=<span class="variable language_">arguments</span>[n];<span class="keyword">for</span>(<span class="keyword">var</span> p <span 
class="keyword">in</span> o)<span class="title class_">Object</span>.<span class="property"><span class="keyword">prototype</span></span>.<span class="property">hasOwnProperty</span>.<span class="title function_">call</span>(o,p)&amp;&amp;(t[p]=o[p])&#125;<span class="keyword">return</span> t&#125;,t.<span class="title function_">apply</span>(<span class="variable language_">this</span>,<span class="variable language_">arguments</span>)&#125;m.<span class="property">exports</span>=t&#125;,<span class="number">2</span>,[]);</span><br><span class="line"><span class="title function_">__d</span>(<span class="keyword">function</span>(<span class="params">g,r,i,a,m,e,d</span>)&#123;<span class="string">&#x27;use strict&#x27;</span>;m.<span class="property">exports</span>=<span class="title function_">r</span>(d[<span class="number">0</span>])&#125;,<span class="number">3</span>,[<span class="number">4</span>]);</span><br></pre></td></tr></table></figure><h4 id="c-Citation-and-activation-of-entrances"><a href="#c-Citation-and-activation-of-entrances" class="headerlink" title="c. Citation and activation of entrances"></a>c. 
Citation and activation of entrances</h4><p>Definitions alone are not enough to make our RN application run; to run it, we need to reference our entry module, and that is what the __r method is for.</p><p>The __r method accepts one parameter, the id of the module to be referenced. If that module is not initialized, it tries to load and initialize it; if the module is not found, it throws the error “Requiring unknown module ‘xxx’”.</p><figure class="highlight js"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line"><span class="title function_">__r</span>(<span class="number">104</span>);</span><br><span class="line"><span class="title function_">__r</span>(<span class="number">0</span>)</span><br></pre></td></tr></table></figure><h3 id="1-4-RCTBridge-ReactBridge"><a href="#1-4-RCTBridge-ReactBridge" class="headerlink" title="1.4 RCTBridge&#x2F;ReactBridge"></a>1.4 RCTBridge&#x2F;ReactBridge</h3><p>The JS code and the JS runtime are ready; how do we actually run the code?</p><p>Anyone who has done or studied RN development knows that the Bridge concept is central: it acts as the bridge between the JS side and the native side, is the basis of JS and native communication and interaction throughout the RN lifecycle, and is mainly responsible for the following tasks.</p><p>a. Creating the RN runtime</p><p>b. Executing JS code: loading the jsbundle and running it</p><p>c. Maintaining dual-end communication between JS and native</p><p>d. 
Maintaining the exported-method tables and their mapping relationships</p><h3 id="1-5-RCTRootView-RCTShadowView"><a href="#1-5-RCTRootView-RCTShadowView" class="headerlink" title="1.5 RCTRootView&#x2F;RCTShadowView"></a>1.5 RCTRootView&#x2F;RCTShadowView</h3><p>With the JS code, the runtime, and the JS execution object ready, we still need a view container to render the interface drawn in React.</p><p>This is where RCTRootView comes in. As the root container of RN, RCTRootView carries all the sub-views, but the React interface is not loaded into RCTRootView directly: it has a child view, RCTRootContentView, which is the object that actually carries the views.</p><p>What is RCTShadowView? RCTShadowView is a mirror of the RCT view tree, similar to the virtual DOM in React. It is responsible for maintaining the state of each view instance; when a change occurs on the JS side, it is first collected by the RCTShadowView, which calculates the changed values and, once the data processing is complete, synchronizes them to the corresponding views to be updated.</p><h3 id="1-6-RCTUIManager"><a href="#1-6-RCTUIManager" class="headerlink" title="1.6 RCTUIManager"></a>1.6 RCTUIManager</h3><p>Now that we have the view container, who manages the interface’s many views and component instances?</p><p>UIManager takes on the responsibility of managing native views and relaying native events. View instances created on the native side are managed by UIManager, which assigns each view instance a unique tag as its key when the view is created; when you need to manipulate a view from the JS side, you only need to pass the tag and parameters to UIManager to locate the specific view instance.</p><h3 id="1-7-RCTBridgeModule"><a href="#1-7-RCTBridgeModule" class="headerlink" title="1.7 RCTBridgeModule"></a>1.7 RCTBridgeModule</h3><p>How do the JS side and the native side communicate, and how should we define our own methods so that JS can call them? The RN framework provides a protocol, RCTBridgeModule; implementing this protocol enables communication between the two ends.</p><p>RCTBridge and RCTUIManager both implement the RCTBridgeModule protocol. On startup, the RN environment scans all classes implementing the protocol to generate a mapping table, which both the native side and the JS side keep; with this table, either side can accurately locate the corresponding implementation when calling a method.</p><h3 id="1-8-MessageQueue"><a href="#1-8-MessageQueue" class="headerlink" title="1.8 MessageQueue"></a>1.8 MessageQueue</h3><p>A communication bridge alone is not enough; asynchronous operation is needed, because if all communication and UI rendering were executed synchronously there would be a serious performance bottleneck. We therefore introduce MessageQueue as a communication pool: all communication and interaction events are thrown into the pool and read out according to scheduling rules. MessageQueue is mainly responsible for asynchronous event-interaction notification. 
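The pooling behaviour can be sketched as follows; ToyMessageQueue and its flush timing are illustrative assumptions, not the actual MessageQueue implementation:

```javascript
// Minimal sketch of MessageQueue-style batching: JS-side calls are pushed
// into a queue and flushed to the native side asynchronously, which is why
// a native call does not take effect immediately. Names are illustrative.
class ToyMessageQueue {
  constructor(flushToNative) {
    this.queue = []; // pending [moduleName, methodName, args] triples
    this.flushToNative = flushToNative;
    this.flushScheduled = false;
  }

  enqueueNativeCall(moduleName, methodName, args) {
    this.queue.push([moduleName, methodName, args]);
    if (!this.flushScheduled) {
      this.flushScheduled = true;
      // Flush on a later tick; the real queue flushes in batches, e.g.
      // every few milliseconds or when the native side pulls the queue.
      setTimeout(() => {
        const batch = JSON.stringify(this.queue); // calls cross the bridge as JSON
        this.queue = [];
        this.flushScheduled = false;
        this.flushToNative(batch);
      }, 0);
    }
  }
}
```

Note that enqueueNativeCall returns before the native side ever sees the batch, which mirrors the delayed-effect behaviour described in the text.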
This is also why a call to a native method does not take effect on the native side immediately. For example, to lock the screen in landscape, you need to add a short delay after calling the rotate-to-landscape method before locking rotation.</p><figure class="highlight js"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"><span class="title class_">RCTOrientationManager</span>.<span class="title function_">lockOrientation</span>(<span class="string">&#x27;landscape-left&#x27;</span>);</span><br><span class="line"><span class="built_in">setTimeout</span>(<span class="function">() =&gt;</span> &#123;</span><br><span class="line">    <span class="title class_">RCTOrientationManager</span>.<span class="title function_">shouldAutorotate</span>(<span class="literal">false</span>);</span><br><span class="line">&#125;, <span class="number">200</span>);</span><br></pre></td></tr></table></figure><h2 id="2-Operation-principle-of-react-native"><a href="#2-Operation-principle-of-react-native" class="headerlink" title="2. Operation principle of react native"></a>2. Operation principle of react native</h2><p>React Native is mainly divided into two parts. One is React: the JSX layer that implements the views and the business logic. The other is Native: the native end, which takes the logic implemented in the JS layer and renders the interface. These two parts can interoperate thanks to jsc (JavaScriptCore): jsc executes the JS code, maps the views implemented in JS to their corresponding native components, and runs JS logic on demand. It is because of jsc that we can write native applications in JavaScript. 
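</p><p>As a rough illustration (a schematic sketch with made-up names, not React Native&#39;s actual internals), you can picture the JS layer&#39;s element tree being flattened into JSON-serializable &quot;create view&quot; instructions for the native side:</p>

```javascript
// Schematic sketch only: names are illustrative, not React Native internals.
// A JS-side element tree is flattened into JSON-serializable "createView"
// instructions that can be handed to the native side for rendering.
const tree = {
  type: 'RCTView',
  props: { flex: 1 },
  children: [{ type: 'RCTText', props: { text: 'Hello' }, children: [] }],
};

function toInstructions(node, tag, out) {
  out = out || [];
  // Each node becomes one instruction keyed by a unique tag.
  out.push({ op: 'createView', tag: tag, viewName: node.type, props: node.props });
  node.children.forEach(function (child, i) {
    toInstructions(child, tag * 10 + i + 1, out);
  });
  return out;
}

const instructions = toInstructions(tree, 1);
console.log(JSON.stringify(instructions));
```

<p>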
But jsc alone can’t run an RN application properly; it relies on a number of other components.</p><h3 id="2-1-RN-Overall-Architecture"><a href="#2-1-RN-Overall-Architecture" class="headerlink" title="2.1 RN Overall Architecture"></a>2.1 RN Overall Architecture</h3><p>In the jsi subsection we mentioned that the architecture of RN has changed considerably with the introduction of jsi.</p><h4 id="2-1-1-Current-version-of-the-architecture"><a href="#2-1-1-Current-version-of-the-architecture" class="headerlink" title="2.1.1 Current version of the architecture"></a>2.1.1 Current version of the architecture</h4><p>We can divide the logic into three parts: the JS thread, the UI thread (main thread), and the Shadow thread. The Shadow thread is mainly responsible for the ShadowView update calculations mentioned above; this work is done by the C++ layer of the yoga framework, and once a calculation is complete, the data is handed over to the main thread to refresh the real views.</p><p>When React (the JS layer) needs to update the interface or call a native interface, the call parameters must be converted to JSON strings and passed to the native layer through the bridge; the native layer then parses the JSON to find the corresponding method or view and perform the operation. Direct calls between the JS layer and the native layer are therefore impossible, and data cannot be shared among the three threads. 
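</p><p>This JSON-only, asynchronous channel can be pictured with a minimal sketch (the class and method names below are made up for illustration; this is not RN&#39;s real MessageQueue API): calls from JS are queued and flushed to the native side as one JSON batch, which is why a JS-to-native call can never return a value synchronously.</p>

```javascript
// Minimal sketch with made-up names (not React Native's real MessageQueue).
// JS-side calls are queued and periodically flushed to native as one JSON
// batch; nothing is shared by reference, so calls cannot return synchronously.
class MiniMessageQueue {
  constructor(sendToNative) {
    this.queue = [];
    this.sendToNative = sendToNative;
  }
  enqueueNativeCall(module, method, args) {
    // No return value: any native result comes back later via a callback.
    this.queue.push([module, method, args]);
  }
  flush() {
    if (this.queue.length === 0) return;
    this.sendToNative(JSON.stringify(this.queue)); // data crosses as JSON text
    this.queue = [];
  }
}

const batches = [];
const mq = new MiniMessageQueue(function (json) { batches.push(json); });
mq.enqueueNativeCall('UIManager', 'createView', [1, 'RCTView', {}]);
mq.enqueueNativeCall('UIManager', 'updateView', [1, { hidden: false }]);
mq.flush(); // both calls leave as a single serialized batch
```

<p>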
Each thread keeps its own copy of the data it needs, and communication between the three threads can only happen through asynchronous calls.</p><p>NativeModules must be loaded at startup, and both the native end and the JS end maintain an object so that a call from the JS side can correctly find the corresponding method. This work consumes significant resources during startup, and it loads many modules that are never used.</p><p>We currently use version 0.59.10 of the RN framework; although the jsi architecture has been introduced, it is not actually used for communication, which is still carried over the bridge.</p><image src="../assets/rn-1.png" /><h4 id="2-1-2-New-version-architecture"><a href="#2-1-2-New-version-architecture" class="headerlink" title="2.1.2 New version architecture"></a>2.1.2 New version architecture</h4><p>Compared to the current architecture, the new version introduces jsi and Fabric. jsi enables direct calls between the JS layer and the native layer; Fabric, which contains the renderer and the shadow thread, will replace UIManager. With jsi, the threads can share data directly: they no longer need to serialize it to JSON to pass it back and forth, nor keep their own copies.</p><image src="../assets/rn-2.png" /><p>The new architecture still uses three threads for parallel processing, but all three can access the data in the JS thread. NativeModules has also adopted TurboModules technology: instead of loading everything at startup, modules are loaded on demand. 
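</p><p>The difference between eager loading and on-demand loading can be sketched in a few lines (an illustrative toy, not the TurboModules implementation): with a lazy registry, a module&#39;s initialization cost is only paid the first time the module is actually used.</p>

```javascript
// Illustrative toy, not the TurboModules implementation: a registry that
// initializes a module the first time it is accessed, instead of loading
// every module eagerly at startup.
function makeLazyRegistry(factories) {
  const cache = {};
  return new Proxy({}, {
    get(target, name) {
      if (!(name in cache)) {
        cache[name] = factories[name](); // load on first use only
      }
      return cache[name];
    },
  });
}

let loadCount = 0;
const NativeModules = makeLazyRegistry({
  OrientationManager: function () {
    loadCount += 1; // stands in for the cost of initializing a native module
    return { lockOrientation: function (o) { return 'locked:' + o; } };
  },
});

// Nothing is loaded at "startup"; the module initializes on first use.
const result = NativeModules.OrientationManager.lockOrientation('landscape-left');
```

<p>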
The new architecture also introduces CodeGen, which automatically generates the native TurboModules code from type definitions and handles compatibility in the communication between threads.</p><p>The new architecture has not yet been released, but it has been used iteratively in Facebook’s internal applications, and a major update is expected in the second half of the year. Reference: <a href="https://reactnative.dev/blog/2021/08/19/h2-2021">https://reactnative.dev/blog/2021/08/19/h2-2021</a></p><h3 id="2-2-RN-startup-logic"><a href="#2-2-RN-startup-logic" class="headerlink" title="2.2 RN startup logic"></a>2.2 RN startup logic</h3><p>Having covered the basic architecture of RN, we also need to understand how the startup process of the current architecture works. Its overall logic can be simplified into the following flow:</p><image src="../assets/rn-3.png" /><h4 id="2-2-1-Creating-the-RCTRootView"><a href="#2-2-1-Creating-the-RCTRootView" class="headerlink" title="2.2.1 Creating the RCTRootView"></a>2.2.1 Creating the RCTRootView</h4><p>As mentioned above, RCTRootView is the native container for the RN interface; generally, for unpacked RN apps, RCTRootView is created as the root view of the app at launch.</p><p>We generally use the -(instancetype)initWithBundleURL:moduleName:initialProperties:launchOptions: method to create the RCTRootView. 
Its four parameters are: the URL of the jsbundle; the name of the application to launch (the name of the root component registered in JS via AppRegistry.registerComponent); the initialization parameters (passed to the root component as its props); and the app launch options (which we generally don’t need to care about). When RCTRootView is created with this method, an RCTBridge is created along with it to maintain the entire life cycle of the RN application.</p><figure class="highlight js"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">[[<span class="title class_">RCTRootView</span> alloc] <span class="attr">initWithBundleURL</span>:[<span class="variable constant_">NSURL</span> <span class="attr">fileURLWithPath</span>:panelPath] <span class="attr">moduleName</span>:@<span class="string">&quot;Demo&quot;</span> <span class="attr">initialProperties</span>:initialProps <span class="attr">launchOptions</span>:launchOptions];</span><br></pre></td></tr></table></figure><h4 id="2-2-2-Creating-the-RCTBridge"><a href="#2-2-2-Creating-the-RCTBridge" class="headerlink" title="2.2.2 Creating the RCTBridge"></a>2.2.2 Creating the RCTBridge</h4><p>RCTBridge is the key object maintaining the RN life cycle, and it is also where we can supply the parameters, methods, and so on that we need. 
Therefore, we more often create an RCTBridge first and then create the RCTRootView from that bridge to initialize the RN application.</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br></pre></td><td class="code"><pre><span class="line">  RCTBridge *bridge = [[RCTBridge alloc] initWithDelegate:self launchOptions:launchOptions];</span><br><span class="line">// or</span><br><span class="line">  RCTBridge *bridge = [[RCTBridge alloc] initWithBundleURL:[[NSBundle mainBundle]URLForResource:@&quot;main&quot; withExtension:@&quot;jsbundle&quot;] moduleProvider:nil launchOptions:launchOptions];</span><br><span class="line">  RCTRootView *rootView = [[RCTRootView alloc] initWithBridge:bridge</span><br><span class="line">                                                   moduleName:@&quot;Demo&quot;</span><br><span class="line"></span><br><span class="line">                                            initialProperties:nil];</span><br><span class="line">- (NSURL *)sourceURLForBridge:(RCTBridge *)bridge</span><br><span class="line">&#123;</span><br><span class="line">#if DEBUG</span><br><span class="line">  return [[RCTBundleURLProvider sharedSettings] jsBundleURLForBundleRoot:@&quot;index&quot; fallbackResource:nil];</span><br><span class="line">#else</span><br><span class="line">  return [[NSBundle mainBundle] URLForResource:@&quot;main&quot; withExtension:@&quot;jsbundle&quot;];</span><br><span class="line">#endif</span><br><span
class="line">&#125;</span><br></pre></td></tr></table></figure><p>The moduleProvider can be used to configure which NativeModules the bridge can access, which can be used when unpacking application control permissions.</p><h4 id="2-2-3-RCTCxxBridge"><a href="#2-2-3-RCTCxxBridge" class="headerlink" title="2.2.3 RCTCxxBridge"></a>2.2.3 RCTCxxBridge</h4><p>When RCTBridge is initialized, it saves the bundleUrl that was set at the time of creation, and creates an instance of RCTCxxBridge called batchedBridge to initialize the RN environment.</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">self.batchedBridge = [[bridgeClass alloc] initWithParentBridge:self];</span><br><span class="line">[self.batchedBridge start];</span><br><span class="line"></span><br></pre></td></tr></table></figure><h4 id="2-2-4-Loading-the-RCTBridgeModule"><a href="#2-2-4-Loading-the-RCTBridgeModule" class="headerlink" title="2.2.4 Loading the RCTBridgeModule"></a>2.2.4 Loading the RCTBridgeModule</h4><p>The batchedBridge startup first sends a notification that it will be loaded, after which it creates a js thread for thread initialization, and then registers the NativeModules.</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span 
class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br></pre></td><td class="code"><pre><span class="line"> RCT_PROFILE_BEGIN_EVENT(RCTProfileTagAlways, @&quot;-[RCTCxxBridge start]&quot;, nil);</span><br><span class="line"></span><br><span class="line"></span><br><span class="line">  [[NSNotificationCenter defaultCenter] postNotificationName:RCTJavaScriptWillStartLoadingNotification</span><br><span class="line">                                                      object:_parentBridge</span><br><span class="line">                                                    userInfo:@&#123;@&quot;bridge&quot; : self&#125;];</span><br><span class="line"></span><br><span class="line"></span><br><span class="line">  // Set up the JS thread early</span><br><span class="line">  _jsThread = [[NSThread alloc] initWithTarget:[self class] selector:@selector(runRunLoop) object:nil];</span><br><span class="line">  _jsThread.name = RCTJSThreadName;</span><br><span class="line">  _jsThread.qualityOfService = NSOperationQualityOfServiceUserInteractive;</span><br><span class="line">#if RCT_DEBUG</span><br><span class="line">  _jsThread.stackSize *= 2;</span><br><span class="line">#endif</span><br><span class="line">  [_jsThread start];</span><br><span class="line">...</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"></span><br><span class="line">  [self registerExtraModules];</span><br><span class="line">  // Initialize all native modules that cannot be loaded lazily</span><br><span class="line">  (void)[self 
_initializeModules:RCTGetModuleClasses() withDispatchGroup:prepareBridge lazilyDiscovered:NO];</span><br><span class="line">  [self registerExtraLazyModules];</span><br><span class="line"></span><br><span class="line">...</span><br><span class="line">  dispatch_group_enter(prepareBridge);</span><br><span class="line">  [self ensureOnJavaScriptThread:^&#123;</span><br><span class="line">    [weakSelf _initializeBridge:executorFactory];</span><br><span class="line">    dispatch_group_leave(prepareBridge);</span><br><span class="line">  &#125;];</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><h4 id="2-2-5-Executing-JS-code"><a href="#2-2-5-Executing-JS-code" class="headerlink" title="2.2.5 Executing JS code"></a>2.2.5 Executing JS code</h4><p>After the NativeModules are loaded, the jsbundle code is read into memory, and once reading finishes the JS code is executed.</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br></pre></td><td class="code"><pre><span class="line">  dispatch_group_enter(prepareBridge);</span><br><span
class="line">  __block NSData *sourceCode;</span><br><span class="line">  [self</span><br><span class="line">      loadSource:^(NSError *error, RCTSource *source) &#123;</span><br><span class="line">        if (error) &#123;</span><br><span class="line">          [weakSelf handleError:error];</span><br><span class="line">        &#125;</span><br><span class="line"></span><br><span class="line"></span><br><span class="line">        sourceCode = source.data;</span><br><span class="line">        dispatch_group_leave(prepareBridge);</span><br><span class="line">      &#125;</span><br><span class="line">      onProgress:^(RCTLoadingProgress *progressData) &#123;</span><br><span class="line">#if (RCT_DEV | RCT_ENABLE_LOADING_VIEW) &amp;&amp; __has_include(&lt;React/RCTDevLoadingViewProtocol.h&gt;)</span><br><span class="line">        id&lt;RCTDevLoadingViewProtocol&gt; loadingView = [weakSelf moduleForName:@&quot;DevLoadingView&quot;</span><br><span class="line">                                                      lazilyLoadIfNecessary:YES];</span><br><span class="line">        [loadingView updateProgress:progressData];</span><br><span class="line">#endif</span><br><span class="line">      &#125;];</span><br><span class="line"></span><br><span class="line"></span><br><span class="line">  // Wait for both the modules and source code to have finished loading</span><br><span class="line">  dispatch_group_notify(prepareBridge, dispatch_get_global_queue(QOS_CLASS_USER_INTERACTIVE, 0), ^&#123;</span><br><span class="line">    RCTCxxBridge *strongSelf = weakSelf;</span><br><span class="line">    if (sourceCode &amp;&amp; strongSelf.loading) &#123;</span><br><span class="line">      [strongSelf executeSourceCode:sourceCode sync:NO];</span><br><span class="line">    &#125;</span><br><span class="line">  &#125;);</span><br></pre></td></tr></table></figure><p>At this point the js code has been executed into memory and the root component has been registered to send the load 
completion notification, and at this point RCTRootView can create the RCTRootContentView.</p><h4 id="2-2-6-runApplication"><a href="#2-2-6-runApplication" class="headerlink" title="2.2.6 runApplication"></a>2.2.6 runApplication</h4><p>As soon as the RCTRootContentView is created, RCTRootView calls the AppRegistry.runApplication method to start loading and rendering the RN logic. From this point on, the RN side of the process begins.</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br></pre></td><td class="code"><pre><span class="line">  _contentView = [[RCTRootContentView alloc] initWithFrame:self.bounds</span><br><span class="line">                                                    bridge:bridge</span><br><span class="line">                                                  reactTag:self.reactTag</span><br><span class="line">                                            sizeFlexiblity:_sizeFlexibility];</span><br><span class="line">  [self runApplication:bridge];</span><br><span class="line"></span><br><span class="line"></span><br><span class="line">- (void)runApplication:(RCTBridge *)bridge</span><br><span class="line">&#123;</span><br><span class="line">  NSString *moduleName = _moduleName ?: @&quot;&quot;;</span><br><span class="line">  NSDictionary *appParameters = @&#123;</span><br><span class="line">    @&quot;rootTag&quot; : 
_contentView.reactTag,</span><br><span class="line">    @&quot;initialProps&quot; : _appProperties ?: @&#123;&#125;,</span><br><span class="line">  &#125;;</span><br><span class="line"></span><br><span class="line"></span><br><span class="line">  RCTLogInfo(@&quot;Running application %@ (%@)&quot;, moduleName, appParameters);</span><br><span class="line">  [bridge enqueueJSCall:@&quot;AppRegistry&quot; method:@&quot;runApplication&quot; args:@[ moduleName, appParameters ] completion:NULL];</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><h4 id="2-2-7-Interface-rendering-on-screen"><a href="#2-2-7-Interface-rendering-on-screen" class="headerlink" title="2.2.7 Interface rendering on screen"></a>2.2.7 Interface rendering on screen</h4><p>After RCTRootView calls runApplication, the call sends the business startup parameters, in the form of messages, through JSCore to the messageQueue (batchedBridge) on the JS side. (As mentioned above, messageQueue passively receives data and actively refreshes at regular intervals: by default it performs a flush every 5 ms, and when it finds a new message during a flush it executes the corresponding logic according to the message’s parameters.) When runApplication executes, it loads the interface using the components registered through AppRegistry.registerComponent. The UIManager then builds a DOM configuration from the components and their levels in the DOM tree and passes it to the native RCTUIManager as JSON; RCTUIManager creates or refreshes the Shadow views from that configuration and computes each component’s rendering parameters (width, height, position, etc.) through yoga. 
The calculated configuration is then passed to the corresponding native views, such as RCTView and RCTImage; these components are rendered into RCTRootContentView layer by layer to complete the first-screen rendering.</p><image src="../assets/rn-4.png" /><h3 id="2-3-JSX-and-Native-View-Mapping-Logic"><a href="#2-3-JSX-and-Native-View-Mapping-Logic" class="headerlink" title="2.3 JSX and Native View Mapping Logic"></a>2.3 JSX and Native View Mapping Logic</h3><p>As mentioned above, the JS-side UIManager needs to pass the configuration of each component to the native-side UIManager, which then handles the subsequent rendering. So how are these configurations kept consistent between the two ends?</p><p>In sections 1.6 and 1.7 we mentioned that RCTUIManager follows the RCTBridgeModule protocol, which allows modules and module methods to be registered. Section 2.2.4 also mentioned that during RCTBridge initialization the exported modules and methods are registered, after which the JS side can obtain references to those module methods.</p><p>During module registration, a configuration file, remoteModuleConfig, is in fact generated on the native side and the JS side at the same time; on the JS side you can inspect this variable through __fbBatchedBridgeConfig.</p><p>When the JS side calls a method, it uses this configuration file to locate the corresponding component on the native side and deliver the event or configuration to it.</p><h3 id="2-4-JS-and-Native-Communication-Logic"><a href="#2-4-JS-and-Native-Communication-Logic" class="headerlink" title="2.4 JS and Native Communication Logic"></a>2.4 JS and Native Communication Logic</h3><p>JS can trigger native methods that a module or component exports. 
We say “trigger” because the operation is asynchronous: you cannot get a return value from the native side directly, and must handle it through a callback or a Promise. This, too, relies on the configuration file exported during module registration, as described in 2.3.</p><image src="../assets/rn-5.png" /><h3 id="2-4-2-Event-Notification"><a href="#2-4-2-Event-Notification" class="headerlink" title="2.4.2 Event Notification"></a>2.4.2 Event Notification</h3><p>The event-passing mechanism is relatively simple: both ends support sending notifications and registering listeners, and we can register for and send notifications through NativeAppEventEmitter, DeviceEventEmitter, and NativeEventEmitter.</p>]]></content>
    
    
    <summary type="html">React Native Knowledge Points Explained</summary>
    
    
    
    <category term="Front end" scheme="https://www.nablepart.com/categories/Front-end/"/>
    
    <category term="RN" scheme="https://www.nablepart.com/categories/Front-end/RN/"/>
    
    
    <category term="RN" scheme="https://www.nablepart.com/tags/RN/"/>
    
    <category term="React Native" scheme="https://www.nablepart.com/tags/React-Native/"/>
    
    <category term="React" scheme="https://www.nablepart.com/tags/React/"/>
    
    <category term="JavascriptCore" scheme="https://www.nablepart.com/tags/JavascriptCore/"/>
    
    <category term="JSI" scheme="https://www.nablepart.com/tags/JSI/"/>
    
  </entry>
  
  <entry>
    <title>The Impact of the Ban on TikTok Shop in Indonesia&#39;s Live E-commerce Landscape</title>
    <link href="https://www.nablepart.com/675efa9b5859/"/>
    <id>https://www.nablepart.com/675efa9b5859/</id>
    <published>2023-10-27T02:28:00.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
    <content type="html"><![CDATA[<h2 id="Introduction"><a href="#Introduction" class="headerlink" title="Introduction"></a>Introduction</h2><p><img src="https://cdn.jsdelivr.net/gh/PirlosM/image@main/20231027170420.png"></p><p>Just a few days before the Chinese Mid-Autumn Festival, on September 27th, the Indonesian Ministry of Trade issued Regulation No. 31 of 2023, which prohibited social media platforms from serving as sales platforms for goods. This sudden ban had a significant impact on Indonesia’s largest social commerce platform, TikTok Shop.  </p><p>Within seven days of the regulation’s announcement, TikTok Shop Indonesia announced its closure on October 4th. This closure dealt a severe blow to TikTok’s presence in the Southeast Asian market, especially in Indonesia, where it had 125 million monthly active users. TikTok had considered Indonesia as one of its crucial markets. With the introduction of TikTok Shop in Indonesia two years ago, many Southeast Asian businesses had focused on expanding their presence in the Indonesian market, recruiting local influencers, and conducting live commerce activities.</p><p>However, with the sudden ban, TikTok Shop in Indonesia faced a significant setback. The platform’s initial investments and efforts were in vain, and over six million sellers on the platform were left in a state of uncertainty and confusion.</p><h2 id="The-Unexpected-Storm"><a href="#The-Unexpected-Storm" class="headerlink" title="The Unexpected Storm"></a>The Unexpected Storm</h2><p>The closure of TikTok Shop caused a frenzy in the Indonesian e-commerce landscape. TikTok Shop’s live streaming rooms were filled with chaos and urgency as sellers hurriedly showcased discounted products to their screens. Many live streamers portrayed scenes of despair as they faced the imminent closure of their shops. 
Despite the pressure, even small live streaming rooms garnered up to 70,000-80,000 likes per session.</p><p>The sudden closure of TikTok Shop left the platform’s 6 million local sellers and approximately 7 million live commerce practitioners and short-video creators in a state of uncertainty about their future. The impact of the ban was limited to social media platforms used for live commerce, while TikTok’s short video content business and other Indonesian mainstream e-commerce platforms such as Shopee, Lazada, and Tokopedia remained unaffected.</p><p>During this period of uncertainty, Indonesian sellers on TikTok Shop had to consider their next steps and how to minimize their losses and risks. They also wondered how TikTok’s official platform would respond to the ban.</p><p>The ban also affected the logistics and shipping industry in Southeast Asia, as well as independent e-commerce platforms and other e-commerce platforms undergoing transformations. They sought to seize this opportunity to diversify their services and generate additional revenue.</p><p><img src="https://cdn.jsdelivr.net/gh/PirlosM/image@main/20231027170302.png"></p><h2 id="The-Indonesian-E-commerce-Landscape"><a href="#The-Indonesian-E-commerce-Landscape" class="headerlink" title="The Indonesian E-commerce Landscape"></a>The Indonesian E-commerce Landscape</h2><p>The ban on social media platforms as sales platforms for goods in Indonesia is not entirely surprising, as the Indonesian government has been tightening e-commerce policies, particularly regarding import goods and cross-border e-commerce, for the past few years.</p><p>In 2019, as e-commerce started booming in Indonesia, cross-border e-commerce practitioners strategically established bonded warehouses in locations such as Batam Island. Initially, products with a value above $100 were subject to taxation, but the threshold was later reduced to $75 to restrict imports. 
In 2020, the Indonesian government further adjusted the import taxation policy for e-commerce, reducing the tax exemption threshold from $75 to $3 per day to protect local sellers.  </p><p>In 2021, Indonesia imposed restrictions on certain categories of cross-border e-commerce, including Muslim clothing, veils, and prayer garments, as well as various textile and apparel products. In August of the same year, a new policy was announced, prohibiting the sale of imported goods valued below $100 (1.5 million Indonesian rupiahs) on online platforms. Import goods had to be first imported into the Indonesian market before being sold, and e-commerce platforms were not allowed to sell their own branded products.</p><p>The ban on TikTok Shop is an extension of these previous restrictions on cross-border e-commerce and efforts to protect the local market. Indonesian President Joko Widodo expressed concerns about TikTok Shop’s growing influence and its impact on micro, small, and medium-sized enterprises (MSMEs). He highlighted the exceptionally low prices at which imported products were being sold on social media platforms, lower than the production costs of local products, which affected local businesses. 
</p><p>Zulkifli Hasan, the Indonesian Minister of Trade, emphasized the need to separate e-commerce from social media platforms, stating that they should not be associated with each other.</p><h2 id="The-Nervous-Southeast-Asian-E-commerce-Market"><a href="#The-Nervous-Southeast-Asian-E-commerce-Market" class="headerlink" title="The Nervous Southeast Asian E-commerce Market"></a>The Nervous Southeast Asian E-commerce Market</h2><p>The ban on TikTok Shop in Indonesia had a ripple effect throughout Southeast Asia, triggering a storm of reactions towards TikTok and other e-commerce platforms.</p><p>In Vietnam, the government had been conducting a comprehensive inspection of TikTok since May 22nd, focusing not only on its e-commerce operations but also on the operations and services of the TikTok app itself. The inspection revealed various violations, including providing cross-border trading services, social networking services, and cross-border advertising services to Vietnam without following regulations. TikTok was given 30 days to rectify the violations and provide written notification to the Ministry of Information and Communications in Vietnam.</p><p>While Vietnam’s inspection primarily targeted TikTok’s overall operations, Malaysia also expressed concerns and summoned TikTok for an explanation. The Malaysian Ministry of Communications and Multimedia plans to investigate the ban on TikTok Shop in Indonesia. </p><p>While TikTok faced challenges in Southeast Asia, e-commerce platforms like Shopee and Lazada, which also had Chinese e-commerce origins, saw this as an opportunity. Lazada’s CEO in Indonesia, James Chang, announced plans to attract sellers affected by the latest e-commerce regulations. Lazada waived fees for sellers who could no longer sell on content-driven e-commerce platforms. 
Additionally, Lazada offered incentives such as three months of zero commission, two months of free shipping, and vouchers worth 300,000 Indonesian rupiahs.</p><p>Shopee became the primary destination for former TikTok Shop sellers. Many sellers who had previously tried TikTok Shop quickly registered on Shopee and Lazada, and some even explored Tokopedia, a local Indonesian e-commerce platform, to diversify their business channels. </p><p>While TikTok faced setbacks in Southeast Asia, its interest-driven e-commerce model continued to hold appeal. The migration of sellers to various e-commerce platforms created new opportunities for value creation and growth in the region.</p><h2 id="Conclusion"><a href="#Conclusion" class="headerlink" title="Conclusion"></a>Conclusion</h2><p>The ban on TikTok Shop in Indonesia has dramatically impacted the live e-commerce landscape in the country. The shutdown left millions of local sellers and live commerce practitioners uncertain about their future. This ban is part of a broader trend in Indonesia to protect local businesses and regulate e-commerce. However, the ban has also created opportunities for other e-commerce platforms like Shopee and Lazada to attract affected sellers and expand their market share. </p><p>As the Southeast Asian e-commerce market continues to evolve, it is crucial for sellers to diversify their platforms and adapt to changing regulations. The closure of TikTok Shop serves as a reminder to businesses to stay agile and explore multiple channels to minimize risks and maximize opportunities.</p><p>While TikTok’s future in the Indonesian e-commerce market remains uncertain, its influence and interest-driven e-commerce model have left a lasting impact on the region.
The e-commerce landscape in Southeast Asia will continue to evolve, presenting new challenges and opportunities for sellers, platforms, and regulators alike.</p><p>Disclaimer: This article is based on information from various sources and does not reflect the views of the platform. The opinions expressed in this article are solely those of the author.</p>]]></content>
    
    
    <summary type="html">This article analyzes Indonesia&#39;s recent ban on selling goods through social media platforms such as TikTok and its implications.</summary>
    
    
    
    <category term="e-commerce" scheme="https://www.nablepart.com/categories/e-commerce/"/>
    
    
    <category term="TikTok" scheme="https://www.nablepart.com/tags/TikTok/"/>
    
    <category term="TikTok Shop" scheme="https://www.nablepart.com/tags/TikTok-Shop/"/>
    
  </entry>
  
  <entry>
    <title>Google&#39;s 25th Anniversary: From AI Wavemaker to Catching Up</title>
    <link href="https://www.nablepart.com/08af70c7e9dd/"/>
    <id>https://www.nablepart.com/08af70c7e9dd/</id>
    <published>2023-10-25T13:28:00.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
    <content type="html"><![CDATA[<h2 id="Introduction"><a href="#Introduction" class="headerlink" title="Introduction"></a>Introduction</h2><p>Google, the tech giant that has revolutionized the way we search for information, recently celebrated its 25th anniversary. Over the years, Google has constantly evolved and adapted to the changing landscape of technology. One of the significant transformations for Google has been its foray into artificial intelligence (AI). In this article, we will explore how Google has transitioned from being an AI wavemaker to a determined catch-up player, and the impact it has had on the company and the world at large.</p><p><img src="https://cdn.jsdelivr.net/gh/PirlosM/image@main/20231026102852.png"></p><h2 id="Google-1-0-The-Early-Years"><a href="#Google-1-0-The-Early-Years" class="headerlink" title="Google 1.0: The Early Years"></a>Google 1.0: The Early Years</h2><p>Google’s journey began in 1998 when Larry Page and Sergey Brin founded the company as a research project at Stanford University. Their mission was to organize the world’s information and make it universally accessible and useful. The early years of Google were characterized by its search engine, which quickly gained popularity due to its accurate and efficient search results. Google’s focus on user experience and relevance set it apart from its competitors.</p><h2 id="Google-2-0-From-“X”-to-“Alphabet”"><a href="#Google-2-0-From-“X”-to-“Alphabet”" class="headerlink" title="Google 2.0: From “X” to “Alphabet”"></a>Google 2.0: From “X” to “Alphabet”</h2><p>As Google continued to grow, it expanded its product portfolio and ventured into new territories. In 2015, the company underwent a major restructuring and formed a parent company called Alphabet. This move allowed Google to allocate resources more effectively and pursue ambitious projects beyond its core search business. 
Alphabet became a conglomerate of companies, with Google being one of its subsidiaries.</p><h2 id="The-Rise-of-Artificial-Intelligence"><a href="#The-Rise-of-Artificial-Intelligence" class="headerlink" title="The Rise of Artificial Intelligence"></a>The Rise of Artificial Intelligence</h2><p>With the advent of AI, Google recognized the immense potential it held for transforming various industries. The company invested heavily in AI research and development, acquiring several AI startups and hiring top talent in the field. Google’s AI initiatives were driven by its commitment to improving user experiences and solving complex problems.</p><h2 id="Google’s-AI-Wavemaker-Era"><a href="#Google’s-AI-Wavemaker-Era" class="headerlink" title="Google’s AI Wavemaker Era"></a>Google’s AI Wavemaker Era</h2><p>During the early stages of AI development, Google positioned itself as a wavemaker, pushing the boundaries of what AI could achieve. It launched several groundbreaking AI-powered products and services that revolutionized the way we interact with technology. One such product was Google Assistant, a virtual assistant that could understand and respond to natural language queries. Google Assistant quickly became a household name and set the benchmark for AI-powered voice assistants.</p><p><img src="https://cdn.jsdelivr.net/gh/PirlosM/image@main/20231026102711.png"></p><p>Google also made significant advancements in computer vision with the development of Google Lens. This AI-powered visual search tool enables users to search for information by simply pointing their camera at objects, landmarks, or text. 
Google Lens has been integrated into various Google products, including Google Photos and Google Search, enhancing the overall user experience.</p><h2 id="Google’s-Catch-Up-Efforts-in-AI"><a href="#Google’s-Catch-Up-Efforts-in-AI" class="headerlink" title="Google’s Catch-Up Efforts in AI"></a>Google’s Catch-Up Efforts in AI</h2><p>While Google was initially at the forefront of AI innovation, it faced stiff competition from other tech giants, particularly in the field of voice assistants. Amazon’s Alexa and Apple’s Siri gained significant market share, leaving Google playing catch-up. Recognizing the need to improve its voice assistant capabilities, Google focused on enhancing the performance and features of Google Assistant.</p><p>In recent years, Google has made significant strides in natural language processing and understanding, enabling Google Assistant to provide more accurate and contextually relevant responses. The introduction of Duplex, an AI system capable of making phone calls on behalf of users, showcased Google’s commitment to pushing the boundaries of AI technology.</p><h2 id="AI-Powered-Products-and-Services"><a href="#AI-Powered-Products-and-Services" class="headerlink" title="AI-Powered Products and Services"></a>AI-Powered Products and Services</h2><p>Apart from voice assistants, Google has incorporated AI into various other products and services. Google Maps, for example, leverages AI algorithms to provide real-time traffic updates and suggest the most efficient routes. AI is also used in Google’s email service, Gmail, to detect and filter spam messages, enhancing user security and productivity.<br>In the field of healthcare, Google’s AI algorithms have shown promise in diagnosing diseases and predicting patient outcomes. Through its DeepMind subsidiary, Google has developed AI models that can analyze medical images and detect abnormalities with high accuracy.
These advancements have the potential to revolutionize healthcare delivery and improve patient outcomes.</p><h2 id="Ethical-Considerations-and-Challenges"><a href="#Ethical-Considerations-and-Challenges" class="headerlink" title="Ethical Considerations and Challenges"></a>Ethical Considerations and Challenges</h2><p>As Google continues to push the boundaries of AI, ethical considerations and challenges arise. The company is committed to ensuring that AI is developed and deployed responsibly, with a focus on transparency and accountability. Google has established ethical guidelines for AI development and usage, aiming to mitigate biases and ensure fairness in AI algorithms.<br>Privacy is another significant concern when it comes to AI. Google has faced criticism for its data collection practices and the potential misuse of user information. The company has taken steps to address these concerns by giving users greater control over their data and implementing strict privacy policies.</p><h2 id="The-Future-of-Google-and-AI"><a href="#The-Future-of-Google-and-AI" class="headerlink" title="The Future of Google and AI"></a>The Future of Google and AI</h2><p>As Google celebrates its 25th anniversary, the future looks promising for the company’s AI endeavors. With ongoing advancements in machine learning and deep learning, Google is poised to continue pushing the boundaries of what AI can achieve. The integration of AI into various products and services will further enhance user experiences and drive innovation across industries.</p><p><img src="https://cdn.jsdelivr.net/gh/PirlosM/image@main/20231026103159.png"></p><p>Google’s commitment to AI research and development, coupled with its vast resources, positions the company as a major player in shaping the future of AI. 
As technology evolves, Google will likely continue to invest in AI-powered solutions that make a positive impact on society.</p><h2 id="Conclusion"><a href="#Conclusion" class="headerlink" title="Conclusion"></a>Conclusion</h2><p>Google’s 25th anniversary marks a significant milestone in the company’s journey. From its humble beginnings as a search engine to its current position as a leader in AI technology, Google has consistently adapted and evolved. The company’s transition from being an AI wavemaker to a determined catch-up player showcases its commitment to innovation and its determination to stay at the forefront of technological advancements. As Google looks towards the future, it will continue to leverage AI to enhance user experiences and shape the world we live in.</p>]]></content>
    
    
    <summary type="html">Google&#39;s transformation in the field of artificial intelligence is a striking development that has not only changed the way we search for information, but has also propelled the entire tech industry forward. Google initially rose to prominence as a search engine company, but over time it came to realize the immense potential of artificial intelligence and began to shift its focus to this field.</summary>
    
    
    
    <category term="行业分析" scheme="https://www.nablepart.com/categories/%E8%A1%8C%E4%B8%9A%E5%88%86%E6%9E%90/"/>
    
    
    <category term="AI" scheme="https://www.nablepart.com/tags/AI/"/>
    
    <category term="Google" scheme="https://www.nablepart.com/tags/Google/"/>
    
  </entry>
  
  <entry>
    <title>Automatically Deploying WordPress Plugins and Themes with GitHub Actions</title>
    <link href="https://www.nablepart.com/fded0cbe67f5/"/>
    <id>https://www.nablepart.com/fded0cbe67f5/</id>
    <published>2023-10-25T11:28:00.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
    <content type="html"><![CDATA[<h2 id="引言"><a href="#引言" class="headerlink" title="Introduction"></a>Introduction</h2><p>In WordPress development and project collaboration, integrating with GitHub can help in many ways. One of them is deploying through automated deployment rather than manual uploads over FTP. This integration can save a great deal of time, and it matters even more when you work with custom themes and plugins. This article explains how to set up a WordPress-GitHub integration to manage the deployment of WordPress themes and plugins.</p><p>In this article, we will not cover:</p><ul><li><p>WordPress core files: we should not edit WordPress core files, so there is no point including them in our repository.</p></li><li><p>The database: putting the WordPress database under version control raises a host of problems, so we will not attempt it.</p></li><li><p>Developing WordPress plugins and themes: we assume you have already finished developing your plugin or theme and are ready to deploy.</p></li></ul><p>Next, let's look at sample workflows for syncing our WordPress themes and plugins!</p><h2 id="部署-WordPress-主题"><a href="#部署-WordPress-主题" class="headerlink" title="Deploying a WordPress Theme"></a>Deploying a WordPress Theme</h2><p>Below is the .yml code for syncing your theme files to the <code>/wp-content/themes</code> folder on your server. Here is a sample workflow for a LAMP server such as one on Digital Ocean:</p><figure class="highlight yaml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span 
class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br></pre></td><td class="code"><pre><span class="line"><span class="attr">name:</span> <span class="string">Deploy</span> <span class="string">Theme</span></span><br><span class="line"></span><br><span class="line"><span class="attr">on:</span></span><br><span class="line">  <span class="attr">push:</span> </span><br><span class="line">    <span class="attr">branches:</span> [<span class="string">master</span>]</span><br><span class="line"></span><br><span class="line"><span class="attr">env:</span></span><br><span class="line">  <span class="attr">SSH_USER:</span> <span class="string">$&#123;&#123;</span> <span class="string">secrets.SSH_USER</span> <span class="string">&#125;&#125;</span></span><br><span class="line">  <span class="attr">SSH_HOST:</span> <span class="string">$&#123;&#123;</span> <span class="string">secrets.SSH_HOST</span> <span class="string">&#125;&#125;</span></span><br><span class="line">  </span><br><span class="line"><span class="attr">jobs:</span></span><br><span class="line">  <span class="attr">deploy:</span></span><br><span class="line">    <span class="attr">name:</span> <span class="string">Deploy</span> <span class="string">WordPress</span> <span class="string">Theme</span> <span class="string">on</span> <span class="string">Digital</span> <span class="string">Ocean</span>  </span><br><span class="line">    <span class="attr">runs-on:</span> <span class="string">ubuntu-latest</span></span><br><span class="line"></span><br><span class="line">    <span class="attr">steps:</span></span><br><span class="line">      <span class="bullet">-</span> <span class="attr">name:</span> <span class="string">Checkout</span></span><br><span class="line">        <span 
class="attr">uses:</span> <span class="string">actions/checkout@v2</span></span><br><span class="line"></span><br><span class="line">      <span class="bullet">-</span> <span class="attr">name:</span> <span class="string">Set</span> <span class="string">SSH</span> <span class="string">Connection</span></span><br><span class="line">        <span class="attr">run:</span> <span class="string">|</span></span><br><span class="line"><span class="string">          mkdir -p ~/.ssh/</span></span><br><span class="line"><span class="string">          echo &quot;$SSH_KEY&quot; &gt; ~/.ssh/deploy.key</span></span><br><span class="line"><span class="string">          chmod 600 ~/.ssh/deploy.key</span></span><br><span class="line"><span class="string">          cat &gt;&gt;~/.ssh/config &lt;&lt;END</span></span><br><span class="line"><span class="string">          Host digitalocean</span></span><br><span class="line"><span class="string">            HostName $SSH_HOST</span></span><br><span class="line"><span class="string">            User $SSH_USER</span></span><br><span class="line"><span class="string">            IdentityFile ~/.ssh/deploy.key</span></span><br><span class="line"><span class="string">            StrictHostKeyChecking no</span></span><br><span class="line"><span class="string">          END</span></span><br><span class="line"><span class="string"></span>        <span class="attr">env:</span></span><br><span class="line">          <span class="attr">SSH_KEY:</span> <span class="string">$&#123;&#123;</span> <span class="string">secrets.DEPLOY_KEY</span> <span class="string">&#125;&#125;</span></span><br><span class="line">          </span><br><span class="line">      <span class="bullet">-</span> <span class="attr">name:</span> <span class="string">Sync</span> <span class="string">theme</span> <span class="string">files</span></span><br><span class="line">        <span class="attr">run:</span> <span class="string">|</span></span><br><span class="line"><span 
class="string">          rsync --delete -avO \</span></span><br><span class="line"><span class="string">            --exclude /deploy_key \</span></span><br><span class="line"><span class="string">            --exclude /.git/ \</span></span><br><span class="line"><span class="string">            --exclude /.github/ \</span></span><br><span class="line"><span class="string">            ./ $&#123;&#123; env.SSH_USER &#125;&#125;@$&#123;&#123; env.SSH_HOST &#125;&#125;:$&#123;&#123; env.DEST &#125;&#125;</span></span><br><span class="line"><span class="string"></span>        <span class="attr">env:</span></span><br><span class="line">          <span class="attr">SSH_HOST:</span> <span class="string">digitalocean</span></span><br><span class="line">          <span class="attr">DEST:</span> <span class="string">&quot;/var/www/your-domain/wp-content/themes/theme-folder&quot;</span></span><br></pre></td></tr></table></figure><p><strong>Workflow name</strong></p><p>The <code>name</code> field on the first line sets the workflow's name; customize it to describe what the workflow does.</p><p><strong>Trigger settings</strong></p><p>Next, we define the trigger: the workflow runs whenever we push the theme's commits to the repository's <code>master</code> branch. You can customize the trigger as needed; see the GitHub documentation for details.</p><figure class="highlight yaml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><span class="attr">on:</span></span><br><span class="line">  <span class="attr">push:</span></span><br><span class="line">    <span class="attr">branches:</span> [<span class="string">master</span>]</span><br></pre></td></tr></table></figure><p><strong>Secrets setup</strong></p><p>We create variables for our custom secrets. These values can be stored in GitHub Secrets and used by the workflow.</p><figure class="highlight yaml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><span class="attr">env:</span></span><br><span class="line">  <span 
class="attr">SSH_USER:</span> <span class="string">$&#123;&#123;</span> <span class="string">secrets.SSH_USER</span> <span class="string">&#125;&#125;</span></span><br><span class="line">  <span class="attr">SSH_HOST:</span> <span class="string">$&#123;&#123;</span> <span class="string">secrets.SSH_HOST</span> <span class="string">&#125;&#125;</span></span><br></pre></td></tr></table></figure><p><strong>Jobs</strong></p><p>In a GitHub Actions workflow file, the jobs section defines one or more jobs to execute when the workflow is triggered.</p><figure class="highlight yaml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"><span class="attr">jobs:</span></span><br><span class="line">  <span class="attr">deploy:</span></span><br><span class="line">    <span class="attr">name:</span> <span class="string">Deploy</span> <span class="string">WordPress</span> <span class="string">Theme</span> <span class="string">on</span> <span class="string">Digital</span> <span class="string">Ocean</span></span><br><span class="line">    <span class="attr">runs-on:</span> <span class="string">ubuntu-latest</span></span><br></pre></td></tr></table></figure><p>In the example above:</p><ul><li><code>deploy</code> is the job's name; customize it to describe the job's purpose.</li><li><code>name</code> is an optional field that makes the workflow easier to understand when viewing it.</li><li><code>runs-on</code> specifies the type of virtual environment or runner the job executes on. This example uses the <code>ubuntu-latest</code> runner, meaning the job runs on the latest available Ubuntu release.</li></ul><p><strong>Steps</strong></p><p>In a GitHub Actions workflow file, the steps section defines the series of individual tasks to execute within a job.</p><figure class="highlight yaml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><span class="attr">steps:</span></span><br><span class="line">  <span class="bullet">-</span> <span class="attr">name:</span> <span class="string">Checkout</span> </span><br><span class="line">    
<span class="attr">uses:</span> <span class="string">actions/checkout@v2</span></span><br></pre></td></tr></table></figure><p>In the example above:</p><ul><li><code>steps</code> contains a list of operations executed in the specified order.</li><li><code>name</code> is an optional field that makes the workflow easier to understand when viewing it.</li><li><code>uses</code> specifies the action to run. This example uses the <code>actions/checkout@v2</code> action to check out the source repository.</li></ul><p><strong>Setting up the SSH connection</strong></p><p>This part of the workflow sets up the SSH connection for secure access to the remote host.</p><figure class="highlight yaml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br></pre></td><td class="code"><pre><span class="line"><span class="bullet">-</span> <span class="attr">name:</span> <span class="string">Set</span> <span class="string">SSH</span> <span class="string">Connection</span></span><br><span class="line">  <span class="attr">run:</span> <span class="string">|</span></span><br><span class="line"><span class="string">    mkdir -p ~/.ssh/</span></span><br><span class="line"><span class="string">    echo &quot;$SSH_KEY&quot; &gt; ~/.ssh/deploy.key</span></span><br><span class="line"><span class="string">    chmod 600 ~/.ssh/deploy.key</span></span><br><span class="line"><span class="string">    cat &gt;&gt;~/.ssh/config &lt;&lt;END</span></span><br><span class="line"><span class="string">    Host digitalocean</span></span><br><span class="line"><span class="string">      HostName $SSH_HOST</span></span><br><span class="line"><span class="string">      User $SSH_USER</span></span><br><span class="line"><span class="string">      IdentityFile ~/.ssh/deploy.key</span></span><br><span class="line"><span class="string">      
StrictHostKeyChecking no</span></span><br><span class="line"><span class="string">    END</span></span><br><span class="line"><span class="string"></span>  <span class="attr">env:</span></span><br><span class="line">    <span class="attr">SSH_KEY:</span> <span class="string">$&#123;&#123;</span> <span class="string">secrets.DEPLOY_KEY</span> <span class="string">&#125;&#125;</span></span><br></pre></td></tr></table></figure><p>In the example above:</p><ul><li><code>name</code> is an optional field that makes the step's purpose clearer.</li><li><code>run</code> defines the steps or shell commands to execute. In this example it:<ul><li>creates the <code>~/.ssh/</code> directory if it does not exist;</li><li>writes the SSH key stored in the <code>SSH_KEY</code> environment variable to the <code>~/.ssh/deploy.key</code> file;</li><li>sets permissions on <code>~/.ssh/deploy.key</code> so that only the current user can access the file;</li><li>appends an SSH configuration to <code>~/.ssh/config</code>, including the host name, user name, and path to the SSH key. It also disables strict host key checking for the “digitalocean” host.</li></ul></li><li>The <code>env</code> section specifies environment variables used in the workflow; here it defines the <code>SSH_KEY</code> variable and takes its value from GitHub Secrets.</li></ul><p><strong>Syncing the theme files</strong></p><p>This step uses the rsync command to sync the theme files to the remote server.</p><figure class="highlight yaml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br></pre></td><td class="code"><pre><span class="line"><span class="bullet">-</span> <span class="attr">name:</span> <span class="string">Sync</span> <span class="string">theme</span> <span class="string">files</span></span><br><span class="line">  <span class="attr">run:</span> <span class="string">|  </span></span><br><span class="line"><span class="string">    rsync --delete -avO \</span></span><br><span class="line"><span class="string">      --exclude /deploy_key \</span></span><br><span class="line"><span class="string">      --exclude /.git/ \</span></span><br><span 
class="line"><span class="string">      --exclude /.github/ \</span></span><br><span class="line"><span class="string">      ./ $&#123;&#123; env.SSH_USER &#125;&#125;@$&#123;&#123; env.SSH_HOST &#125;&#125;:$&#123;&#123; env.DEST &#125;&#125;</span></span><br><span class="line"><span class="string"></span>  <span class="attr">env:</span></span><br><span class="line">    <span class="attr">SSH_HOST:</span> <span class="string">digitalocean</span></span><br><span class="line">    <span class="attr">DEST:</span> <span class="string">&quot;/var/www/your-domain/wp-content/themes/theme-folder&quot;</span></span><br></pre></td></tr></table></figure><p>The rsync command performs the following tasks:</p><ul><li><code>--delete</code>: ensures files deleted locally are also deleted on the remote server.</li><li><code>-avO</code>: sets <code>rsync</code> to archive mode (<code>-a</code>) with verbose output (<code>-v</code>), and omits directory modification times (<code>-O</code>).</li><li><code>--exclude</code>: specifies files or directories to exclude from the sync. This example excludes the <code>/deploy_key</code>, <code>/.git/</code>, and <code>/.github/</code> folders, but you can customize this to your project's needs.</li><li><code>./</code>: the source directory to sync from, i.e. the workflow's current directory.</li><li><code>$&#123;&#123; env.SSH_USER &#125;&#125;@$&#123;&#123; env.SSH_HOST &#125;&#125;:$&#123;&#123; env.DEST &#125;&#125;</code>: the sync destination. It uses environment variables for the SSH user name, host, and target path.</li></ul><p><strong>Variable definitions</strong></p><p>This part contains the remaining environment variables used in the workflow.</p><figure class="highlight yaml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><span class="attr">env:</span></span><br><span class="line">  <span class="attr">SSH_HOST:</span> <span class="string">digitalocean</span></span><br><span class="line">  <span class="attr">DEST:</span> <span class="string">&quot;/var/www/your-domain/wp-content/themes/theme-folder&quot;</span></span><br></pre></td></tr></table></figure><h2 id="部署-WordPress-插件"><a href="#部署-WordPress-插件" class="headerlink" title="Deploying a WordPress Plugin"></a>Deploying a WordPress Plugin</h2><p>To deploy a plugin, follow the same approach as in the example above and simply change the deployment path to
<code>/var/www/your-domain/wp-content/plugins/plugin-folder</code>. Here is a sample workflow:</p><figure class="highlight yaml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br></pre></td><td class="code"><pre><span class="line"><span class="attr">name:</span> <span class="string">Deploy</span> <span class="string">Plugin</span></span><br><span class="line"></span><br><span class="line"><span class="attr">on:</span> </span><br><span class="line">  <span class="attr">push:</span></span><br><span class="line">    <span class="attr">branches:</span> [<span class="string">master</span>]</span><br><span 
class="line"></span><br><span class="line"><span class="attr">env:</span></span><br><span class="line">  <span class="attr">SSH_USER:</span> <span class="string">$&#123;&#123;</span> <span class="string">secrets.SSH_USER</span> <span class="string">&#125;&#125;</span></span><br><span class="line">  <span class="attr">SSH_HOST:</span> <span class="string">$&#123;&#123;</span> <span class="string">secrets.SSH_HOST</span> <span class="string">&#125;&#125;</span></span><br><span class="line">  </span><br><span class="line"><span class="attr">jobs:</span></span><br><span class="line">  <span class="attr">deploy:</span>  </span><br><span class="line">    <span class="attr">name:</span> <span class="string">Deploy</span> <span class="string">WordPress</span> <span class="string">Plugin</span> <span class="string">on</span> <span class="string">Digital</span> <span class="string">Ocean</span></span><br><span class="line">    <span class="attr">runs-on:</span> <span class="string">ubuntu-20.04</span></span><br><span class="line"></span><br><span class="line">    <span class="attr">steps:</span></span><br><span class="line">      <span class="bullet">-</span> <span class="attr">name:</span> <span class="string">Checkout</span></span><br><span class="line">        <span class="attr">uses:</span> <span class="string">actions/checkout@v2</span></span><br><span class="line"></span><br><span class="line">      <span class="bullet">-</span> <span class="attr">name:</span> <span class="string">Set</span> <span class="string">SSH</span> <span class="string">Connection</span>  </span><br><span class="line">        <span class="attr">run:</span> <span class="string">|</span></span><br><span class="line"><span class="string">          mkdir -p ~/.ssh/</span></span><br><span class="line"><span class="string">          echo &quot;$SSH_KEY&quot; &gt; ~/.ssh/deploy.key</span></span><br><span class="line"><span class="string">          chmod 600 ~/.ssh/deploy.key</span></span><br><span 
class="line"><span class="string">          cat &gt;&gt;~/.ssh/config &lt;&lt;END</span></span><br><span class="line"><span class="string">          Host digitalocean</span></span><br><span class="line"><span class="string">            HostName $SSH_HOST</span></span><br><span class="line"><span class="string">            User $SSH_USER</span></span><br><span class="line"><span class="string">            IdentityFile ~/.ssh/deploy.key</span></span><br><span class="line"><span class="string">            StrictHostKeyChecking no</span></span><br><span class="line"><span class="string">          END</span></span><br><span class="line"><span class="string"></span>        <span class="attr">env:</span></span><br><span class="line">          <span class="attr">SSH_KEY:</span> <span class="string">$&#123;&#123;</span> <span class="string">secrets.DEPLOY_KEY</span> <span class="string">&#125;&#125;</span></span><br><span class="line">          </span><br><span class="line">      <span class="bullet">-</span> <span class="attr">name:</span> <span class="string">Sync</span> <span class="string">plugin</span> <span class="string">files</span></span><br><span class="line">        <span class="attr">run:</span> <span class="string">|</span></span><br><span class="line"><span class="string">          rsync --delete -avO \</span></span><br><span class="line"><span class="string">            --exclude /deploy_key \</span></span><br><span class="line"><span class="string">            --exclude /.git/ \</span></span><br><span class="line"><span class="string">            --exclude /.github/ \</span></span><br><span class="line"><span class="string">            ./ $&#123;&#123; env.SSH_USER &#125;&#125;@$&#123;&#123; env.SSH_HOST &#125;&#125;:$&#123;&#123; env.DEST &#125;&#125;</span></span><br><span class="line"><span class="string"></span>        <span class="attr">env:</span></span><br><span class="line">          <span class="attr">SSH_HOST:</span> <span 
class="string">digitalocean</span></span><br><span class="line">          <span class="attr">DEST:</span> <span class="string">&quot;/var/www/your-domain/wp-content/plugins/plugin-folder&quot;</span></span><br></pre></td></tr></table></figure><p>This is a sample repository that includes several custom workflows for different servers. You are welcome to add more workflows, or suggest improvements to the existing ones, by submitting a pull request. If you find it useful, please consider starring the repository.</p><h2 id="结论"><a href="#结论" class="headerlink" title="Conclusion"></a>Conclusion</h2><p>This article showed how to automatically deploy WordPress plugins and themes with GitHub Actions. By storing your code in a GitHub repository and using a GitHub Actions workflow, you can easily sync and deploy your WordPress plugins and themes. This integration can significantly improve development efficiency while reducing the time spent on, and the errors introduced by, manual deployment.</p>]]></content>
    
    
    <summary type="html">This is a tutorial on automatically deploying WordPress plugins and themes with GitHub Actions. Overall, it is an accessible walkthrough of WordPress auto-deployment, and a useful reference for anyone who wants to streamline their deployment workflow with GitHub integration.</summary>
    
    
    
    <category term="Tutorials" scheme="https://www.nablepart.com/categories/%E6%95%99%E7%A8%8B%E6%8C%87%E5%8D%97/"/>
    
    
    <category term="GitHub" scheme="https://www.nablepart.com/tags/GitHub/"/>
    
    <category term="WordPress" scheme="https://www.nablepart.com/tags/WordPress/"/>
    
  </entry>
  
  <entry>
    <title>Augmenting Human Cybersecurity Capabilities with AI</title>
    <link href="https://www.nablepart.com/330f98e01519/"/>
    <id>https://www.nablepart.com/330f98e01519/</id>
    <published>2023-10-25T11:28:00.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
    <content type="html"><![CDATA[<h2 id="引言"><a href="#引言" class="headerlink" title="Introduction"></a>Introduction</h2><p>In today's digital world, cybersecurity threats are growing increasingly serious. How can artificial intelligence (AI) be used to augment human cybersecurity capabilities? This article explores that question in depth: it introduces the application scenarios, advantages, and core concepts of AI in the security field, walks you through using AI techniques in practice, and shares code examples to help you better understand and apply AI to strengthen your security defenses.</p><h2 id="AI增强网络安全能力概述"><a href="#AI增强网络安全能力概述" class="headerlink" title="Overview of AI-Augmented Cybersecurity"></a>Overview of AI-Augmented Cybersecurity</h2><p>The main applications of AI in cybersecurity include intrusion detection, malware analysis, and vulnerability discovery. By drawing on AI's intelligence, adaptability, and capacity to process massive amounts of data, the overall level of security protection can be raised substantially.</p><h2 id="AI在网络安全领域的应用场景与优势"><a href="#AI在网络安全领域的应用场景与优势" class="headerlink" title="AI Application Scenarios and Advantages in Cybersecurity"></a>AI Application Scenarios and Advantages in Cybersecurity</h2><ol><li>Intrusion detection: by analyzing network traffic data, AI can recognize anomalous behavior and detect potential attacks early. Compared with traditional intrusion detection systems, AI offers higher accuracy and better adaptability.</li><li>Malware analysis: AI can quickly identify malicious software by analyzing its code and behavior, effectively protecting enterprise networks and endpoints.</li><li>Vulnerability discovery: AI can automatically scan target systems for potential security flaws. Compared with traditional manual scanning, it can discover and help remediate vulnerabilities far more efficiently.</li></ol><h2 id="实际操作使用AI技术"><a href="#实际操作使用AI技术" class="headerlink" title="Hands-On with AI Techniques"></a>Hands-On with AI Techniques</h2><p>This section demonstrates a practical case of applying AI to security defense.</p><p>Case study: intrusion detection with machine learning</p><ol><li>Data collection: gather traffic data from the network and label it as normal or malicious.</li><li>Data preprocessing: clean and organize the collected data and extract features.</li><li>Model building: use a machine learning algorithm (such as a support vector machine or naive Bayes) to build a classifier that separates normal traffic from malicious traffic.</li><li>Model tuning: improve the classifier's accuracy by adjusting model parameters and refining the model structure.</li><li>Deployment: integrate the trained classifier into the security system to detect anomalous network traffic in real time.</li></ol><h2 id="代码示例与讲解"><a href="#代码示例与讲解" class="headerlink" title="Code Examples and Walkthrough"></a>Code Examples and Walkthrough</h2><p>This section provides code examples for the key steps so that you can better understand and apply the techniques. Note that these examples are for learning and reference only; real-world use will likely require adaptation and tuning.</p><ol><li>Data collection and preprocessing<br>First, we need to collect traffic data from the network and preprocess it. Common tools for this stage include the Scrapy framework (for web crawling) and the Pandas library (for data processing).</li></ol><p>Example code:</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">import</span> pandas <span class="keyword">as</span> pd</span><br><span class="line"><span class="keyword">from</span> scrapy.selector <span class="keyword">import</span> Selector</span><br><span class="line"><span class="keyword">from</span> scrapy.http <span class="keyword">import</span> TextResponse</span><br><span class="line"><span class="keyword">import</span> requests</span><br></pre></td></tr></table></figure><ol start="2"><li>Feature extraction and model building<br>Next, we extract features from the data and build a classifier with a machine learning algorithm, using a common machine learning library such as scikit-learn.</li></ol><p>Example code:</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">from</span> sklearn <span class="keyword">import</span> svm, datasets</span><br><span class="line"><span class="keyword">from</span> sklearn.model_selection <span class="keyword">import</span> train_test_split</span><br></pre></td></tr></table></figure><ol start="3"><li>Model tuning and deployment<br>Finally, we tune the classifier by adjusting its parameters and structure, then deploy it into the security system. Common tuning techniques include grid search and cross-validation; common security tools include Snort (an open-source intrusion detection system).</li></ol><p>Example code:</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">from</span> sklearn.model_selection <span class="keyword">import</span> GridSearchCV</span><br><span class="line"><span class="keyword">from</span> sklearn.metrics <span class="keyword">import</span> accuracy_score</span><br><span class="line"></span><br><span class="line"><span class="comment"># Load the dataset</span></span><br><span class="line">iris = datasets.load_iris()</span><br><span class="line">X = iris.data</span><br><span class="line">y = iris.target</span><br><span class="line"></span><br><span class="line"><span class="comment"># Split into training and test sets</span></span><br><span class="line">X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=<span class="number">0.2</span>, random_state=<span class="number">42</span>)</span><br><span class="line"></span><br><span class="line"><span class="comment"># Build the classifier</span></span><br><span class="line">clf = svm.SVC()</span><br><span class="line"></span><br><span class="line"><span class="comment"># Parameter grid</span></span><br><span class="line">param_grid = &#123;<span class="string">&#x27;C&#x27;</span>: [<span class="number">0.1</span>, <span class="number">1</span>, <span class="number">10</span>, <span class="number">100</span>, <span class="number">1000</span>], <span class="string">&#x27;kernel&#x27;</span>: [<span class="string">&#x27;rbf&#x27;</span>], <span class="string">&#x27;gamma&#x27;</span>: [<span class="number">1e-7</span>, <span class="number">1e-6</span>, <span class="number">1e-5</span>, <span class="number">1e-4</span>]&#125;</span><br><span class="line"></span><br><span class="line"><span class="comment"># Tune the model with grid search</span></span><br><span class="line">clf_grid = GridSearchCV(estimator=clf, param_grid=param_grid, cv=<span class="number">5</span>)</span><br><span class="line">clf_grid.fit(X_train, y_train)</span><br><span class="line"></span><br><span class="line"><span class="comment"># Predict with the best-parameter classifier</span></span><br><span class="line">y_pred = clf_grid.predict(X_test)</span><br><span class="line"></span><br><span class="line"><span class="comment"># Compute the accuracy</span></span><br><span class="line">accuracy = accuracy_score(y_test, y_pred)</span><br><span class="line"><span class="built_in">print</span>(<span class="string">f&quot;Accuracy: <span class="subst">&#123;accuracy&#125;</span>&quot;</span>)</span><br></pre></td></tr></table></figure><p>This code uses the scikit-learn library to build a support vector machine classifier and tunes it with grid search. The tuned model then predicts on the test set and its accuracy is computed; the final line prints the model's accuracy on the test set.</p><h2 id="总结"><a href="#总结" class="headerlink" title="Summary"></a>Summary</h2><p>In this tutorial, we looked at how artificial intelligence can augment human cybersecurity capabilities. Through a hands-on case of AI-based security defense and the accompanying code examples, we showed how AI can be applied to intrusion detection, malware analysis, and vulnerability discovery. Although AI has already made significant progress in security, many challenges remain, such as handling complex network traffic and diverse attack techniques, and improving model generalization and robustness. As AI technology continues to develop and its application scenarios expand, there is good reason to believe that it will play an ever more important role in cybersecurity and provide stronger protection.</p>]]></content>
    
    
    <summary type="html">This article explains how to use artificial intelligence (AI) to augment human cybersecurity capabilities. It introduces the application scenarios, advantages, and core concepts of AI in the security field, walks readers through using AI techniques in practice, and shares related code examples.</summary>
    
    
    
    <category term="Tutorials" scheme="https://www.nablepart.com/categories/%E6%95%99%E7%A8%8B%E6%8C%87%E5%8D%97/"/>
    
    
    <category term="AI" scheme="https://www.nablepart.com/tags/AI/"/>
    
    <category term="Cybersecurity" scheme="https://www.nablepart.com/tags/%E7%BD%91%E7%BB%9C%E5%AE%89%E5%85%A8/"/>
    
  </entry>
  
  <entry>
    <title>7 Essential Concepts for Android Concurrent Programming</title>
    <link href="https://www.nablepart.com/951e4bbe0a1b/"/>
    <id>https://www.nablepart.com/951e4bbe0a1b/</id>
    <published>2023-10-24T02:28:00.000Z</published>
    <updated>2025-08-25T09:00:39.798Z</updated>
    
    <content type="html"><![CDATA[<p>In modern Android application development, coroutines have become an indispensable technique. They not only simplify asynchronous programming but also provide many powerful tools and features that perform well in advanced scenarios.</p><h2 id="1-协程基础"><a href="#1-协程基础" class="headerlink" title="1. Coroutine Basics"></a>1. Coroutine Basics</h2><p>A coroutine is a concurrency mechanism that lets code handle asynchronous tasks while remaining sequential in style. Coroutines not only simplify asynchronous programming but also improve code readability and maintainability. They perform asynchronous operations through suspending functions (functions marked with suspend) without blocking a thread.</p><p>In Kotlin, a coroutine is created and started with the launch function, which returns a Job instance representing the coroutine's lifecycle. The coroutine body lives inside the braces passed to launch.</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">import</span> kotlinx.coroutines.*</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">fun</span> <span class="title">main</span><span class="params">()</span></span> &#123;</span><br><span class="line">  <span class="comment">// Create a coroutine</span></span><br><span class="line">  <span class="keyword">val</span> job = GlobalScope.launch &#123;</span><br><span class="line">    <span class="comment">// Coroutine body</span></span><br><span class="line">    delay(<span class="number">1000</span>)</span><br><span class="line">    println(<span class="string">&quot;Hello from Coroutine!&quot;</span>)</span><br><span class="line">  &#125;</span><br><span class="line"></span><br><span class="line">  <span class="comment">// Wait for the coroutine to finish</span></span><br><span class="line">  runBlocking &#123; </span><br><span class="line">    job.join()</span><br><span class="line">  &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Cancelling a coroutine is a graceful way to end it and avoid resource leaks. A coroutine can be cancelled by calling cancel. In addition, when a parent coroutine is cancelled, all of its child coroutines are cancelled as well.</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">import</span> kotlinx.coroutines.*</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">fun</span> <span class="title">main</span><span class="params">()</span></span> = runBlocking &#123;</span><br><span class="line">  <span class="keyword">val</span> job = launch &#123;</span><br><span class="line">    <span class="keyword">try</span> &#123;</span><br><span class="line">      delay(<span class="number">1000</span>)</span><br><span class="line">      println(<span class="string">&quot;Coroutine completed.&quot;</span>)  </span><br><span class="line">    &#125; <span class="keyword">catch</span> (e: CancellationException) &#123;</span><br><span class="line">      println(<span class="string">&quot;Coroutine was cancelled.&quot;</span>)</span><br><span class="line">    &#125;</span><br><span class="line">  &#125;</span><br><span class="line">  </span><br><span class="line">  delay(<span class="number">500</span>) </span><br><span class="line">  job.cancel() <span class="comment">// Cancel the coroutine</span></span><br><span class="line">  job.join()</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Exceptions inside a coroutine can be caught and handled with try and catch. If an exception thrown inside a coroutine is not handled, it propagates to the coroutine's caller.</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">import</span> kotlinx.coroutines.*</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">fun</span> <span class="title">main</span><span class="params">()</span></span> = runBlocking &#123;</span><br><span class="line">  <span class="keyword">val</span> job = launch &#123;</span><br><span class="line">    <span class="keyword">try</span> &#123;</span><br><span class="line">      <span class="keyword">throw</span> Exception(<span class="string">&quot;Something went wrong&quot;</span>)</span><br><span class="line">    &#125; <span class="keyword">catch</span> (e: Exception) &#123;</span><br><span class="line">      println(<span class="string">&quot;Exception caught: <span class="subst">$&#123;e.message&#125;</span>&quot;</span>) </span><br><span class="line">    &#125;</span><br><span class="line">  &#125;</span><br><span class="line"></span><br><span class="line">  job.join()  </span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><h2 id="2-上下文与调度器"><a href="#2-上下文与调度器" class="headerlink" title="2. Context and Dispatchers"></a>2. Context and Dispatchers</h2><p>Coroutine contexts and dispatchers are core concepts in Kotlin coroutines: they determine the environment and the thread on which a coroutine runs. Choosing the right dispatcher lets coroutines execute efficiently on different threads, enabling concurrent processing and performance optimization.</p><p>The coroutine context is the environment a coroutine runs in; it contains elements such as the dispatcher and the exception handler. The dispatcher is the part of the context that decides which thread the coroutine executes on. Kotlin ships with several built-in dispatchers, such as Dispatchers.Main, Dispatchers.IO, and Dispatchers.Default.</p><p>By choosing different dispatchers, we can run coroutine code on different threads and optimize for concurrency and performance.</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">launch(Dispatchers.IO) &#123;</span><br><span class="line">  <span class="comment">// Runs on the IO thread pool; suited to network requests and file operations</span></span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line">launch(Dispatchers.Default) &#123;</span><br><span class="line">  <span class="comment">// Runs on the default thread pool; suited to CPU-intensive work</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>The withContext function switches threads inside a coroutine, which avoids blocking the main thread while preserving the coroutine's execution context.</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">launch &#123;</span><br><span class="line">  <span class="keyword">val</span> result = withContext(Dispatchers.IO) &#123;</span><br><span class="line">    <span class="comment">// Perform the async work on the IO dispatcher</span></span><br><span class="line">  &#125;</span><br><span class="line">  <span class="comment">// Handle the result on the UI thread</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Besides the built-in dispatchers, you can create a custom dispatcher for special needs, for example one backed by a specific thread pool or scheduling policy.</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">val</span> customDispatcher = Executors.newFixedThreadPool(<span class="number">4</span>).asCoroutineDispatcher()</span><br><span class="line"></span><br><span class="line">launch(customDispatcher) &#123;</span><br><span class="line">  <span class="comment">// Run coroutine code on the custom dispatcher</span></span><br><span class="line">&#125; </span><br></pre></td></tr></table></figure><p>Used well, coroutine contexts and dispatchers let coroutines execute efficiently on the right threads, making concurrent processing and performance tuning far more convenient in asynchronous code.</p><h2 id="3-挂起函数"><a href="#3-挂起函数" class="headerlink" title="3. Suspending Functions"></a>3. Suspending Functions</h2><p>Suspending functions are a key building block of Kotlin coroutines; they allow asynchronous operations to be handled elegantly inside a coroutine. By mastering how to call and write suspending functions, and how to handle their exceptions, you can process asynchronous operations reliably and keep your code stable.</p><p>A suspending function is a function marked with the suspend keyword. It can be suspended inside a coroutine and resume once some operation completes. Typical examples include asynchronous operations such as network requests, file I/O, and database queries.</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">suspend</span> <span class="function"><span class="keyword">fun</span> <span class="title">fetchUserData</span><span class="params">()</span></span>: UserData &#123;</span><br><span class="line">  <span class="comment">// Perform the async operation and wait for the data</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Calling a suspending function inside a coroutine is straightforward: call it like a regular function, with no manual thread switching.</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">launch &#123;</span><br><span class="line">  <span class="keyword">val</span> userData = fetchUserData()</span><br><span class="line">  <span class="comment">// Process the fetched user data</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Exception handling is an important part of working with coroutines. Use try and catch to capture exceptions thrown by suspending functions and keep the code robust.</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line">launch &#123;</span><br><span class="line">  <span class="keyword">try</span> &#123;</span><br><span class="line">    <span class="keyword">val</span> userData = fetchUserData()</span><br><span class="line">    <span class="comment">// Process the fetched user data</span></span><br><span class="line">  &#125; <span class="keyword">catch</span> (e: Exception) &#123;</span><br><span class="line">    <span class="comment">// Handle the failure</span></span><br><span class="line">  &#125;</span><br><span class="line">&#125;  </span><br></pre></td></tr></table></figure><p>When a coroutine is cancelled, its suspending functions are cancelled too. This cancellation mechanism releases resources promptly and avoids leaks.</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br></pre></td><td class="code"><pre><span class="line">launch &#123;</span><br><span class="line">  <span class="keyword">try</span> &#123;</span><br><span class="line">    <span class="keyword">val</span> userData = fetchUserData() </span><br><span class="line">    <span class="comment">// Process the fetched user data</span></span><br><span class="line">  &#125; <span class="keyword">catch</span> (e: CancellationException) &#123;</span><br><span class="line">    <span class="comment">// Runs when the coroutine is cancelled</span></span><br><span class="line">  &#125; <span class="keyword">catch</span> (e: Exception) &#123;</span><br><span class="line">    <span class="comment">// Handle other exceptions</span></span><br><span class="line">  &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>A coroutine scope created with the coroutineScope function can launch new coroutines inside a suspending function; it waits for all of its child coroutines to complete before continuing.</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">suspend</span> <span class="function"><span class="keyword">fun</span> <span class="title">performMultipleTasks</span><span class="params">()</span></span> = coroutineScope &#123;</span><br><span class="line">  <span class="keyword">val</span> result1 = async &#123; fetchFromNetwork() &#125;</span><br><span class="line">  <span class="keyword">val</span> result2 = async &#123; fetchFromDatabase() &#125;</span><br><span class="line">  <span class="keyword">val</span> combinedResult = result1.await() + result2.await()</span><br><span class="line">  <span class="comment">// Process the results of the concurrent tasks  </span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>In short, by understanding how suspending functions are declared, called, and combined with exception handling, you can handle asynchronous operations in coroutines with confidence.</p><h2 id="4-协程作用域"><a href="#4-协程作用域" class="headerlink" title="4. Coroutine Scopes"></a>4. Coroutine Scopes</h2><p>Coroutine scopes give us an elegant, controlled way to manage the lifecycle and extent of coroutines. By creating scopes deliberately and combining them with structured concurrency, we can avoid resource leaks, improve code readability, and ensure coroutines run in the right context, making asynchronous programming more convenient.</p><p>A coroutine scope is an instance of CoroutineScope used to create and manage related coroutines. Confining coroutines to a specific scope gives us better control over their lifecycle. A scope is usually tied to a component such as an Activity, Fragment, or ViewModel, so that all coroutines are cancelled when the component is destroyed, preventing resource leaks.</p><p>In Kotlin, we can create a coroutine scope with CoroutineScope. For example, in an Activity:</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">class</span> <span class="title class_">MyActivity</span> : <span class="type">AppCompatActivity</span>(), CoroutineScope <span class="keyword">by</span> CoroutineScope(Dispatchers.Main) &#123;</span><br><span class="line"></span><br><span class="line">  <span class="comment">// ...</span></span><br><span class="line"></span><br><span class="line">  <span class="keyword">override</span> <span class="function"><span class="keyword">fun</span> <span class="title">onDestroy</span><span class="params">()</span></span> &#123;</span><br><span class="line">    <span class="keyword">super</span>.onDestroy()</span><br><span class="line">    cancel() <span class="comment">// Cancel every coroutine in this scope </span></span><br><span class="line">  &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Coroutines launched inside a scope inherit its context and dispatcher: they run on the same thread and are subject to the same cancellation.</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">launch &#123;</span><br><span class="line">  <span class="comment">// Launched inside a coroutine scope</span></span><br><span class="line">  <span class="comment">// This coroutine inherits the context and dispatcher of the outer scope</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Scopes can be nested; coroutines in an inner scope inherit the outer scope's context, which lets us manage coroutine lifecycles at a finer granularity.</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">class</span> <span class="title class_">MyActivity</span> : <span class="type">AppCompatActivity</span>(), CoroutineScope <span class="keyword">by</span> CoroutineScope(Dispatchers.Main) &#123;</span><br><span class="line"></span><br><span class="line">  <span class="comment">// ...</span></span><br><span class="line"></span><br><span class="line">  <span class="function"><span class="keyword">fun</span> <span class="title">performMultipleTasks</span><span class="params">()</span></span> = launch &#123;    </span><br><span class="line">    <span class="comment">// Launch a coroutine in the outer scope</span></span><br><span class="line">    launch &#123;</span><br><span class="line">      <span class="comment">// Launch a coroutine in the inner scope</span></span><br><span class="line">    &#125;</span><br><span class="line">  &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Structured concurrency is a key property of coroutine scopes: execution continues only after all coroutines in the scope have completed, which helps avoid race conditions and resource leaks.</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br></pre></td><td class="code"><pre><span class="line">runBlocking &#123;</span><br><span class="line">  <span class="comment">// Launch coroutines in a structured-concurrency scope</span></span><br><span class="line">  launch &#123;</span><br><span class="line">    <span class="comment">// Coroutine 1</span></span><br><span class="line">  &#125;</span><br><span class="line">  launch &#123;</span><br><span class="line">    <span class="comment">// Coroutine 2</span></span><br><span class="line">  &#125;</span><br><span class="line">  <span class="comment">// Continues only after all coroutines complete</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Used this way, coroutine scopes combined with structured concurrency keep lifecycles under control and make asynchronous code both safer and easier to read.</p><h2 id="5-并发与顺序性"><a href="#5-并发与顺序性" class="headerlink" title="5. Concurrency and Sequencing"></a>5. Concurrency and Sequencing</h2><p>Asynchronous code needs both concurrent execution of multiple tasks and guarantees that certain operations happen in a specific order. Kotlin coroutines provide flexible mechanisms for both, and make composing multiple coroutines straightforward.</p><p>Coroutines make managing concurrent tasks very intuitive. With the launch function, we can run several tasks in different coroutines at the same time, all within the same scope, inheriting the same context and dispatcher.</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">launch &#123;</span><br><span class="line">  <span class="keyword">val</span> result1 = async &#123; fetchFromNetwork() &#125;</span><br><span class="line">  <span class="keyword">val</span> result2 = async &#123; fetchFromDatabase() &#125;</span><br><span class="line">  <span class="keyword">val</span> combinedResult = result1.await() + result2.await()</span><br><span class="line">  <span class="comment">// Process the results of the concurrent tasks</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Sometimes operations must run in a particular order, for example reading from the database before making a network request. The async function supports this kind of sequencing: await suspends until the previous operation completes.</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">launch &#123;</span><br><span class="line">  <span class="keyword">val</span> dataFromDatabase = async &#123; fetchFromDatabase() &#125;.await()</span><br><span class="line">  <span class="keyword">val</span> updatedData = async &#123; performNetworkRequest(dataFromDatabase) &#125;.await() </span><br><span class="line">  <span class="comment">// Process the result of the sequential operations</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>In more complex scenarios, you may need to compose several coroutines into a single execution flow. Combining async with await, together with structured concurrency, makes this kind of orchestration possible.</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line">runBlocking &#123;</span><br><span class="line">  <span class="keyword">val</span> result = withContext(Dispatchers.IO) &#123;</span><br><span class="line">    <span class="keyword">val</span> dataFromDatabase = async &#123; fetchFromDatabase() &#125;.await()</span><br><span class="line">    <span class="keyword">val</span> updatedData = async &#123; performNetworkRequest(dataFromDatabase) &#125;.await()</span><br><span class="line">    <span class="comment">// More operations...</span></span><br><span class="line">  &#125;</span><br><span class="line">  <span class="comment">// Process the final result</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Concurrency and sequencing are everyday needs in asynchronous programming, and Kotlin coroutines handle both with flexible, concise mechanisms. With launch, async, await, and structured concurrency, it is easy to express both concurrent execution and ordered operations.</p><h2 id="6-协程间通信"><a href="#6-协程间通信" class="headerlink" title="6. Communication Between Coroutines"></a>6. Communication Between Coroutines</h2><p>Communication between coroutines is essential in concurrent programming. Kotlin coroutines offer several mechanisms for it, such as channels for exchanging data and cooperative scheduling between coroutines.</p><p>A channel is a concurrency primitive for passing data between coroutines. It behaves like a queue, supporting send and receive operations, which enables data sharing and synchronization between coroutines.</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">val</span> channel = Channel&lt;<span class="built_in">Int</span>&gt;()</span><br><span class="line"></span><br><span class="line">launch &#123;</span><br><span class="line">  repeat(<span class="number">5</span>) &#123;</span><br><span class="line">    delay(<span class="number">1000</span>)</span><br><span class="line">    channel.send(it)</span><br><span class="line">  &#125;</span><br><span class="line">  channel.close() </span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line">launch &#123;</span><br><span class="line">  <span class="keyword">for</span> (value <span class="keyword">in</span> channel) &#123;</span><br><span class="line">    println(value)</span><br><span class="line">  &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Cooperation between coroutines is a higher-level form of communication built on suspension and resumption. For example, the yield function gives up the current coroutine's turn so that other coroutines get a chance to run.</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br></pre></td><td class="code"><pre><span class="line">launch &#123;</span><br><span class="line">  repeat(<span class="number">5</span>) &#123;</span><br><span class="line">    println(<span class="string">&quot;Coroutine 1&quot;</span>)</span><br><span class="line">    yield()</span><br><span class="line">  &#125;</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line">launch &#123;</span><br><span class="line">  repeat(<span class="number">5</span>) &#123;</span><br><span class="line">    println(<span class="string">&quot;Coroutine 2&quot;</span>)</span><br><span class="line">    yield()</span><br><span class="line">  &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>With channels on one hand and suspension and resumption on the other, coroutines can share data and cooperate flexibly and efficiently.</p><h2 id="7-协程在UI线程中的使用"><a href="#7-协程在UI线程中的使用" class="headerlink" title="7. Using Coroutines on the UI Thread"></a>7. Using Coroutines on the UI Thread</h2><p>In Android development, coroutines can run on the UI thread to perform non-blocking asynchronous work. This avoids blocking the main thread and keeps the user interface responsive.</p><p>On Android, the Dispatchers.Main dispatcher switches coroutine execution to the main thread.</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">launch(Dispatchers.Main) &#123;</span><br><span class="line">  <span class="comment">// Runs coroutine code on the UI thread </span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>When performing UI work inside a coroutine, avoid long blocking operations that would make the interface stutter. Use withContext to move expensive work to a background thread, then handle the result back on the UI thread.</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">launch(Dispatchers.Main) &#123;</span><br><span class="line">  <span class="keyword">val</span> result = withContext(Dispatchers.IO) &#123;</span><br><span class="line">    <span class="comment">// Run the expensive work on a background thread</span></span><br><span class="line">  &#125;</span><br><span class="line">  <span class="comment">// Handle the result on the UI thread</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Running coroutines on the UI thread keeps Android apps responsive by never blocking the main thread. By using the Dispatchers.Main dispatcher and the withContext function appropriately, we can optimize how UI work is executed and improve the user experience.</p><h2 id="总结"><a href="#总结" class="headerlink" title="Summary"></a>Summary</h2><p>This article took a deep dive into seven essential concepts in Android concurrent programming: coroutine basics, context and dispatchers, suspending functions, coroutine scopes, concurrency and sequencing, communication between coroutines, and using coroutines on the UI thread. Applied well, these concepts help developers make better use of coroutines to build efficient Android apps. I hope you find them useful!</p>]]></content>
    
    
    <summary type="html">This article takes an in-depth look at seven essential topics in Android concurrent programming, helping developers make better use of coroutines to build efficient Android apps.</summary>
    
    
    
    <category term="教程指南" scheme="https://www.nablepart.com/categories/%E6%95%99%E7%A8%8B%E6%8C%87%E5%8D%97/"/>
    
    
    <category term="Android" scheme="https://www.nablepart.com/tags/Android/"/>
    
    <category term="Coroutine" scheme="https://www.nablepart.com/tags/Coroutine/"/>
    
    <category term="应用开发" scheme="https://www.nablepart.com/tags/%E5%BA%94%E7%94%A8%E5%BC%80%E5%8F%91/"/>
    
  </entry>
  
  <entry>
    <title>Git Commands for Efficient Development: A Comprehensive Guide</title>
    <link href="https://www.nablepart.com/3f03a91a8618/"/>
    <id>https://www.nablepart.com/3f03a91a8618/</id>
    <published>2023-10-22T02:28:00.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
    <content type="html"><![CDATA[<blockquote><p>Git is an essential tool for version control and collaboration in software development. Whether you’re a beginner or an experienced developer, having a solid understanding of Git commands is crucial for efficient and productive work. In this comprehensive guide, we will explore various Git commands and their applications in day-to-day development tasks. By the end of this article, you’ll have a firm grasp of the most important Git commands and how to use them effectively.</p></blockquote><h2 id="Table-of-Contents"><a href="#Table-of-Contents" class="headerlink" title="Table of Contents"></a>Table of Contents</h2><ol><li>Introduction to Git<ul><li>What is Git?</li><li>Why is Git important?</li></ul></li><li>Configuring Git<ul><li>Setting up user information</li><li>Generating SSH keys</li></ul></li><li>Managing Remote Repositories<ul><li>Initializing a repository</li><li>Viewing remote repositories</li><li>Adding and removing remote repositories</li><li>Cloning a remote repository</li></ul></li><li>Branching and Merging<ul><li>Creating and switching branches</li><li>Deleting branches</li><li>Merging branches</li></ul></li><li>Checking Out Commits<ul><li>Checking out a specific commit</li><li>Creating a new branch from a commit</li><li>Discarding changes in the working directory</li></ul></li><li>Tracking Changes<ul><li>Checking the status of the repository</li><li>Staging changes</li><li>Committing changes</li><li>Amending commits</li></ul></li><li>Pulling and Pushing<ul><li>Pulling changes from a remote repository</li><li>Pushing changes to a remote repository</li></ul></li><li>Resolving Conflicts<ul><li>Identifying and understanding conflicts</li><li>Resolving conflicts manually</li><li>Using merge tools to resolve conflicts</li></ul></li><li>Stashing Changes<ul><li>Stashing changes for later use</li><li>Applying stashed changes</li><li>Clearing stash entries</li></ul></li><li>Version Tagging<ul><li>Creating 
tags</li><li>Pushing tags to remote repositories</li><li>Deleting tags</li></ul></li></ol><h2 id="1-Introduction-to-Git"><a href="#1-Introduction-to-Git" class="headerlink" title="1. Introduction to Git"></a>1. Introduction to Git</h2><h3 id="What-is-Git"><a href="#What-is-Git" class="headerlink" title="What is Git?"></a>What is Git?</h3><p>Git is a distributed version control system that allows multiple developers to collaborate on a project efficiently. It tracks changes to files and directories, creates a history of commits, and enables easy merging of changes from different branches. Git provides a reliable and flexible way to manage code, making it the industry standard for version control.</p><h3 id="Why-is-Git-important"><a href="#Why-is-Git-important" class="headerlink" title="Why is Git important?"></a>Why is Git important?</h3><p>Git offers several benefits for developers and development teams. It allows for easy collaboration, as developers can work on different branches and merge their changes seamlessly. Git also provides a complete history of changes, making it easier to track and revert to previous versions if necessary. Additionally, Git enables efficient and reliable deployment processes, ensuring that software releases are stable and error-free.</p><h2 id="2-Configuring-Git"><a href="#2-Configuring-Git" class="headerlink" title="2. Configuring Git"></a>2. 
Configuring Git</h2><p>Before starting to use Git, it’s important to configure your user information and set up SSH keys for secure communication with remote repositories.</p><h3 id="Setting-up-user-information"><a href="#Setting-up-user-information" class="headerlink" title="Setting up user information"></a>Setting up user information</h3><p>To set your global user name and email address, you can use the following Git command:</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">git config --global user.name &quot;Your Name&quot;</span><br><span class="line">git config --global user.email &quot;your-email@example.com&quot;</span><br></pre></td></tr></table></figure><p>These settings will be used for all your Git repositories unless you override them locally.</p><h3 id="Generating-SSH-keys"><a href="#Generating-SSH-keys" class="headerlink" title="Generating SSH keys"></a>Generating SSH keys</h3><p>To generate SSH keys for secure authentication with remote repositories, you can use the following command:</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ssh-keygen -t rsa -b 4096 -C &quot;your-email@example.com&quot;</span><br></pre></td></tr></table></figure><p>This will generate a public and private key pair. The public key should be added to your Git hosting provider, while the private key should be kept secure on your local machine.</p><h2 id="3-Managing-Remote-Repositories"><a href="#3-Managing-Remote-Repositories" class="headerlink" title="3. Managing Remote Repositories"></a>3. 
Managing Remote Repositories</h2><p>Git allows you to work with remote repositories, either by initializing a new repository or by cloning an existing one.</p><h3 id="Initializing-a-repository"><a href="#Initializing-a-repository" class="headerlink" title="Initializing a repository"></a>Initializing a repository</h3><p>To initialize a new repository, navigate to the desired directory and run the following command:</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">git init</span><br></pre></td></tr></table></figure><p>This will create a new Git repository in the current directory. You can then start tracking changes and making commits.</p><h3 id="Viewing-remote-repositories"><a href="#Viewing-remote-repositories" class="headerlink" title="Viewing remote repositories"></a>Viewing remote repositories</h3><p>To view the remote repositories associated with your local repository, you can use the following command:</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">git remote -v</span><br></pre></td></tr></table></figure><p>This will display the name and URL of the remote repositories linked to your local repository.</p><h3 id="Adding-and-removing-remote-repositories"><a href="#Adding-and-removing-remote-repositories" class="headerlink" title="Adding and removing remote repositories"></a>Adding and removing remote repositories</h3><p>To add a new remote repository, you can use the following command:</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">git remote add origin https://github.com/your-username/your-repository.git</span><br></pre></td></tr></table></figure><p>Replace the URL with the actual URL of the remote repository you want to add.</p><p>To 
remove a remote repository, you can use the following command:</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">git remote remove origin</span><br></pre></td></tr></table></figure><p>This will remove the remote repository named “origin” from your local repository.</p><h3 id="Cloning-a-remote-repository"><a href="#Cloning-a-remote-repository" class="headerlink" title="Cloning a remote repository"></a>Cloning a remote repository</h3><p>To clone an existing remote repository to your local machine, you can use the following command:</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">git clone https://github.com/your-username/your-repository.git</span><br></pre></td></tr></table></figure><p>This will create a local copy of the remote repository, including all branches and commit history.</p><h2 id="4-Branching-and-Merging"><a href="#4-Branching-and-Merging" class="headerlink" title="4. Branching and Merging"></a>4. Branching and Merging</h2><p>Git’s branching and merging capabilities are fundamental for collaborative development. Branches allow you to work on different features or bug fixes independently, while merging combines the changes from different branches into a single branch.</p><h3 id="Creating-and-switching-branches"><a href="#Creating-and-switching-branches" class="headerlink" title="Creating and switching branches"></a>Creating and switching branches</h3><p>To create a new branch, use the following command:</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">git branch new-branch</span><br></pre></td></tr></table></figure><p>Replace “new-branch” with the desired name for your new branch. 
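Note that git branch creates the branch but does not move you onto it. As a side note not covered above, creation and switching can be combined into a single step; the sketch below runs in a throwaway repository so it is safe to try as-is (names are illustrative, and `git switch`/`git branch --show-current` assume a reasonably recent Git, 2.23+):

```shell
set -e
# Demo in a throwaway repository so the commands are safe to run as-is.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit --allow-empty -qm "initial"

# Create "new-branch" and switch to it in a single step
git checkout -b new-branch

# Equivalent modern form (Git 2.23+): git switch -c new-branch
git branch --show-current   # → new-branch
```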
To switch to the newly created branch, use the following command:</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">git checkout new-branch</span><br></pre></td></tr></table></figure><p>This will switch your working directory to the new branch, allowing you to make changes specific to that branch.</p><h3 id="Deleting-branches"><a href="#Deleting-branches" class="headerlink" title="Deleting branches"></a>Deleting branches</h3><p>To delete a branch, use the following command:</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">git branch -d branch-to-delete</span><br></pre></td></tr></table></figure><p>Replace “branch-to-delete” with the name of the branch you want to delete. Note that you cannot delete the branch you are currently on. If you want to force delete a branch, you can use the -D option instead of -d.</p><h3 id="Merging-branches"><a href="#Merging-branches" class="headerlink" title="Merging branches"></a>Merging branches</h3><p>To merge changes from one branch into another, use the following command:</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">git merge source-branch</span><br></pre></td></tr></table></figure><p>Replace “source-branch” with the name of the branch you want to merge into the current branch. Git will automatically merge the changes and create a new commit.</p><h2 id="5-Checking-Out-Commits"><a href="#5-Checking-Out-Commits" class="headerlink" title="5. Checking Out Commits"></a>5. 
Checking Out Commits</h2><p>Git allows you to check out specific commits, enabling you to view and modify the code at a previous state.</p><h3 id="Checking-out-a-specific-commit"><a href="#Checking-out-a-specific-commit" class="headerlink" title="Checking out a specific commit"></a>Checking out a specific commit</h3><p>To check out a specific commit, use the following command:</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">git checkout commit-id</span><br></pre></td></tr></table></figure><p>Replace “commit-id” with the ID of the commit you want to check out. Git will update your working directory to reflect the state of the code at that commit. This leaves you in a “detached HEAD” state, so create a new branch from there if you want to keep any commits you make.</p><h3 id="Creating-a-new-branch-from-a-commit"><a href="#Creating-a-new-branch-from-a-commit" class="headerlink" title="Creating a new branch from a commit"></a>Creating a new branch from a commit</h3><p>To create a new branch based on a specific commit, use the following command:</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">git checkout -b new-branch commit-id</span><br></pre></td></tr></table></figure><p>Replace “new-branch” with the desired name for your new branch, and “commit-id” with the ID of the commit you want to base the branch on.</p><h3 id="Discarding-changes-in-the-working-directory"><a href="#Discarding-changes-in-the-working-directory" class="headerlink" title="Discarding changes in the working directory"></a>Discarding changes in the working directory</h3><p>To discard uncommitted changes in the working directory and restore tracked files to their last staged or committed state, use the following command:</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">git checkout .</span><br></pre></td></tr></table></figure><p>This 
will remove all modifications to tracked files in the working directory. Changes discarded this way cannot be recovered, so double-check before running it.</p><h2 id="6-Tracking-Changes"><a href="#6-Tracking-Changes" class="headerlink" title="6. Tracking Changes"></a>6. Tracking Changes</h2><p>Git provides various commands to track changes, stage them for commit, and commit them to the repository.</p><h3 id="Checking-the-status-of-the-repository"><a href="#Checking-the-status-of-the-repository" class="headerlink" title="Checking the status of the repository"></a>Checking the status of the repository</h3><p>To check the status of the repository and see which files have been modified, added, or deleted, use the following command:</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">git status</span><br></pre></td></tr></table></figure><p>Git will display a summary of the current state of the repository, including any changes that have not been committed.</p><h3 id="Staging-changes"><a href="#Staging-changes" class="headerlink" title="Staging changes"></a>Staging changes</h3><p>Before committing changes, you need to stage them by adding them to the index. To stage all changes, use the following command: </p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">git add .</span><br></pre></td></tr></table></figure><p>This command adds all modified and new files to the staging area. 
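To make the staging area concrete, here is a runnable sketch in a throwaway repository (file names are invented); in `git status --short` output, staged new files appear with `A` and untracked files with `??`:

```shell
set -e
# Demo in a throwaway repository so the commands are safe to run as-is.
repo=$(mktemp -d)
cd "$repo"
git init -q

echo "int main(void){return 0;}" > app.c
echo "notes" > README.md

git add app.c          # stage only app.c
git status --short     # app.c shows as staged (A), README.md as untracked (??)
```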
If you only want to stage specific files or directories, you can provide their paths as arguments to the git add command.</p><h3 id="Committing-changes"><a href="#Committing-changes" class="headerlink" title="Committing changes"></a>Committing changes</h3><p>To commit the staged changes to the repository, use the following command:</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">git commit -m &quot;Commit message&quot; </span><br></pre></td></tr></table></figure><p>Replace “Commit message” with a descriptive message about the changes you are committing. This message helps track the history of the repository and understand the purpose of each commit.</p><h3 id="Amending-commits"><a href="#Amending-commits" class="headerlink" title="Amending commits"></a>Amending commits</h3><p>If you need to modify the last commit, you can use the following command:</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">git commit --amend</span><br></pre></td></tr></table></figure><p>This command opens a text editor where you can modify the commit message. You can also add or remove changes from the commit by staging or unstaging files before saving.</p><h2 id="7-Pulling-and-Pushing"><a href="#7-Pulling-and-Pushing" class="headerlink" title="7. Pulling and Pushing"></a>7. 
Pulling and Pushing</h2><p>To collaborate effectively with other developers, you need to be able to pull changes from a remote repository and push your changes to it.</p><h3 id="Pulling-changes-from-a-remote-repository"><a href="#Pulling-changes-from-a-remote-repository" class="headerlink" title="Pulling changes from a remote repository"></a>Pulling changes from a remote repository</h3><p>To fetch and merge changes from a remote repository into your local branch, use the following command:</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">git pull origin branch-name</span><br></pre></td></tr></table></figure><p>Replace “origin” with the name of the remote repository and “branch-name” with the name of the branch you want to pull.</p><h3 id="Pushing-changes-to-a-remote-repository"><a href="#Pushing-changes-to-a-remote-repository" class="headerlink" title="Pushing changes to a remote repository"></a>Pushing changes to a remote repository</h3><p>To push your local changes to a remote repository, use the following command:</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">git push origin branch-name</span><br></pre></td></tr></table></figure><p>Replace “origin” with the name of the remote repository and “branch-name” with the name of the branch you want to push.</p><h2 id="8-Resolving-Conflicts"><a href="#8-Resolving-Conflicts" class="headerlink" title="8. Resolving Conflicts"></a>8. Resolving Conflicts</h2><p>When working on a collaborative project, conflicts can arise when merging changes from different branches. 
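To see what a conflict actually looks like, the following self-contained sketch (branch name, file name, and contents are all invented for illustration) manufactures one in a throwaway repository and prints the markers Git inserts:

```shell
set -e
# Build a tiny repository with two branches that edit the same line.
repo=$(mktemp -d)
cd "$repo"
git init -q
g() { git -c user.name=demo -c user.email=demo@example.com "$@"; }

echo 'greeting = "hello"' > config.txt
git add config.txt
g commit -qm "base"

g checkout -qb feature
echo 'greeting = "hello from feature"' > config.txt
g commit -qam "feature edit"

g checkout -q -                 # back to the original branch
echo 'greeting = "hello from main"' > config.txt
g commit -qam "main edit"

g merge feature || true         # the merge stops with a conflict
cat config.txt                  # shows <<<<<<< HEAD, =======, >>>>>>> feature
```

Everything between the `<<<<<<<` and `>>>>>>>` markers must be edited down to the final version before the file is staged and the merge committed.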
Git provides tools to help you identify and resolve these conflicts.</p><h3 id="Identifying-and-understanding-conflicts"><a href="#Identifying-and-understanding-conflicts" class="headerlink" title="Identifying and understanding conflicts"></a>Identifying and understanding conflicts</h3><p>When a conflict occurs during a merge, Git will mark the conflicting sections in the affected files with special markers. These markers indicate the conflicting changes from each branch, and you need to manually resolve the conflicts.</p><h3 id="Resolving-conflicts-manually"><a href="#Resolving-conflicts-manually" class="headerlink" title="Resolving conflicts manually"></a>Resolving conflicts manually</h3><p>To resolve conflicts manually, open the conflicting file in a text editor and locate the conflict markers. Edit the file to remove the conflicting sections and keep the desired changes. Once you have resolved all conflicts, save the file and stage it for commit.</p><h3 id="Using-merge-tools-to-resolve-conflicts"><a href="#Using-merge-tools-to-resolve-conflicts" class="headerlink" title="Using merge tools to resolve conflicts"></a>Using merge tools to resolve conflicts</h3><p>Git also provides merge tools that can assist in resolving conflicts. These tools provide a graphical interface to highlight and resolve conflicts. Popular merge tools include KDiff3, Beyond Compare, and P4Merge; once configured, they are typically launched during a conflicted merge with the git mergetool command.</p><h2 id="9-Stashing-Changes"><a href="#9-Stashing-Changes" class="headerlink" title="9. Stashing Changes"></a>9. Stashing Changes</h2><p>Sometimes, you may need to temporarily store your changes without committing them. 
Git provides the stash feature for this purpose.</p><h3 id="Stashing-changes-for-later-use"><a href="#Stashing-changes-for-later-use" class="headerlink" title="Stashing changes for later use"></a>Stashing changes for later use</h3><p>To stash your changes, use the following command:</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">git stash</span><br></pre></td></tr></table></figure><p>This will save your current changes and revert the working directory to the last committed state.</p><h3 id="Applying-stashed-changes"><a href="#Applying-stashed-changes" class="headerlink" title="Applying stashed changes"></a>Applying stashed changes</h3><p>To apply the most recent stash, use the following command:</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">git stash apply</span><br></pre></td></tr></table></figure><p>This will apply the changes from the stash and leave the stash intact. If you have multiple stashes, you can specify the stash ID to apply a specific stash.</p><h3 id="Clearing-stash-entries"><a href="#Clearing-stash-entries" class="headerlink" title="Clearing stash entries"></a>Clearing stash entries</h3><p>To remove all stash entries, use the following command:</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">git stash clear</span><br></pre></td></tr></table></figure><p>This will permanently delete all stash entries, freeing up storage space.</p><h2 id="10-Version-Tagging"><a href="#10-Version-Tagging" class="headerlink" title="10. Version Tagging"></a>10. Version Tagging</h2><p>Git allows you to create tags to mark specific versions of your code. 
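As a small aside before creating any tags, it helps to know how to list the ones that already exist; the sketch below (tag names are invented) runs in a throwaway repository:

```shell
set -e
# Demo in a throwaway repository so the commands are safe to run as-is.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit --allow-empty -qm "initial"

git tag v1.0
git tag v1.1

git tag              # lists all tags, one per line: v1.0, v1.1
git tag -l "v1.*"    # -l filters tags by pattern
```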
Tags are useful for referencing specific commits and marking significant milestones in your project’s history.</p><h3 id="Creating-tags"><a href="#Creating-tags" class="headerlink" title="Creating tags"></a>Creating tags</h3><p>To create a lightweight tag, use the following command:</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">git tag tag-name</span><br></pre></td></tr></table></figure><p>Replace “tag-name” with the desired name for your tag. Lightweight tags are simply pointers to specific commits.</p><p>To create an annotated tag with additional information, use the following command:</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">git tag -a tag-name -m &quot;Tag message&quot;</span><br></pre></td></tr></table></figure><p>Replace “tag-name” with the desired name for your tag and “Tag message” with a descriptive message.</p><h3 id="Pushing-tags-to-remote-repositories"><a href="#Pushing-tags-to-remote-repositories" class="headerlink" title="Pushing tags to remote repositories"></a>Pushing tags to remote repositories</h3><p>To push tags to a remote repository, use the following command:</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">git push origin --tags</span><br></pre></td></tr></table></figure><p>This command pushes all local tags to the remote repository.</p><h3 id="Deleting-tags"><a href="#Deleting-tags" class="headerlink" title="Deleting tags"></a>Deleting tags</h3><p>To delete a local tag, use the following command:</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">git tag -d tag-name</span><br></pre></td></tr></table></figure>
<p>Replace “tag-name” with the name of the tag you want to delete.</p><p>To delete a remote tag, use the following command:</p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">git push origin --delete tag-name</span><br></pre></td></tr></table></figure><p>Replace “tag-name” with the name of the tag you want to delete.</p><h2 id="Conclusion"><a href="#Conclusion" class="headerlink" title="Conclusion"></a>Conclusion</h2><p>In this comprehensive guide, we covered the most important Git commands for efficient development. From configuring Git and managing remote repositories to branching, merging, and resolving conflicts, you now have a solid foundation in using Git for version control and collaboration. By mastering these commands and understanding their applications, you’ll be able to streamline your development workflow and contribute effectively to any Git-based project.</p><p><strong>Remember to always refer to the official Git documentation and explore additional resources to deepen your knowledge and explore advanced Git features. Happy coding!</strong></p>]]></content>
    
    
    <summary type="html">This guide covers fundamental Git commands for software developers. It explains how to configure Git, work with remote repositories, manage branches, track changes, merge code, resolve conflicts, stash changes, and tag releases.</summary>
    
    
    
    <category term="教程指南" scheme="https://www.nablepart.com/categories/%E6%95%99%E7%A8%8B%E6%8C%87%E5%8D%97/"/>
    
    
    <category term="Git" scheme="https://www.nablepart.com/tags/Git/"/>
    
    <category term="Version Control" scheme="https://www.nablepart.com/tags/Version-Control/"/>
    
    <category term="Code Management" scheme="https://www.nablepart.com/tags/Code-Management/"/>
    
  </entry>
  
  <entry>
    <title>6.6k Stars! Heimdall, a navigation aggregation page with highly customizable configuration.</title>
    <link href="https://www.nablepart.com/0ff6821e30e3/"/>
    <id>https://www.nablepart.com/0ff6821e30e3/</id>
    <published>2023-10-21T12:00:00.000Z</published>
    <updated>2025-08-25T09:00:39.786Z</updated>
    
    <content type="html"><![CDATA[<blockquote><p>The Internet is full of endless information and websites, which can feel like information overload. The power of the navigation page is that it provides users with a focal point to quickly access the sites they use frequently without having to type in the URL or perform tedious searches each time.</p></blockquote><h2 id="Application-Overview"><a href="#Application-Overview" class="headerlink" title="Application Overview"></a>Application Overview</h2><p>Heimdall is an open source, self-hosted application dashboard designed to help users centrally manage their web services, applications and resources. The navigation page can be personalized to suit your needs and interests, bringing together the most frequently visited websites in one convenient place. This allows users to see their frequently used websites at a glance on the same page without constantly switching between tabs or windows, maintaining a neat and efficient browsing experience.</p><h2 id="Main-Features"><a href="#Main-Features" class="headerlink" title="Main Features"></a>Main Features</h2><ol><li><strong>Personalized Dashboard:</strong> Heimdall provides a customizable dashboard that allows users to add, move, and organize their frequently used apps, services, and links to suit their needs.</li><li><strong>Application and Service Integration:</strong> Users can integrate a variety of web applications and services into the Heimdall dashboard, including websites, self-hosted apps, Docker containers, cloud services, and more, so that they can be easily accessed in one location.</li><li><strong>Appearance and Theme Customization:</strong> Heimdall allows users to customize the appearance and theme of their dashboards, including choosing different layouts, color schemes, and icons to meet their aesthetic preferences.</li><li><strong>Search Functionality:</strong> Heimdall is equipped with powerful search functionality that can simultaneously 
support multiple search engines such as Google, Bing, DuckDuckGo and more.</li><li><strong>Security and Authentication:</strong> Users can add security measures such as usernames and passwords or other authentication methods to ensure that only authorized users can access the dashboard, helping to protect sensitive information and resources.</li><li><strong>Multi-User Support:</strong> Heimdall supports multiple users, which means that team members can share a dashboard and access and edit it based on permission levels, facilitating collaboration and resource sharing.</li><li><strong>Open Source and Free:</strong> Heimdall is an open source project that is completely free to use and can be customized and modified by the user to adapt it to specific requirements.</li><li><strong>Quick Links:</strong> Users can easily add quick links to their favorite websites for quick access.</li></ol><h2 id="Application-Features"><a href="#Application-Features" class="headerlink" title="Application Features"></a>Application Features</h2><h3 id="I-Support-multiple-search-engines"><a href="#I-Support-multiple-search-engines" class="headerlink" title="I. Support multiple search engines"></a>I. Support multiple search engines</h3><p>Heimdall supports multiple search engines, enabling users to select their preferred search engine based on their preferences and quickly access these search engines in the dashboard.</p><p><img src="https://grstatic.oss-cn-shanghai.aliyuncs.com/marketplace/Heimdall/%E6%90%9C%E7%B4%A2.png"></p><h3 id="II-Customize-Upload-Background"><a href="#II-Customize-Upload-Background" class="headerlink" title="II. Customize Upload Background"></a>II. 
Customize Upload Background</h3><p>The background of the dashboard can be easily customized by uploading your favorite image or selecting other background options to make the dashboard more personalized.</p><p><img src="https://grstatic.oss-cn-shanghai.aliyuncs.com/marketplace/Heimdall/%E8%83%8C%E6%99%AF.png"></p><h3 id="III-Editing-Configurable-Items"><a href="#III-Editing-Configurable-Items" class="headerlink" title="III. Editing Configurable Items"></a>III. Editing Configurable Items</h3><p>Heimdall provides a wide range of editing and configuration options as well as a preview function, which allows users to adjust the dashboard’s layout, colors, fonts, and other configurable items according to their own needs and preferences.</p><p><img src="https://grstatic.oss-cn-shanghai.aliyuncs.com/marketplace/Heimdall/%E7%BC%96%E8%BE%91.png"></p><h3 id="Fourth-the-management-interface-Dashboard"><a href="#Fourth-the-management-interface-Dashboard" class="headerlink" title="IV. The Management Interface Dashboard"></a>IV. The Management Interface Dashboard</h3><p>With Heimdall’s intuitive management interface, users can easily manage and configure their dashboards, including adding, deleting and organizing applications, setting permissions and more.</p><p><img src="https://grstatic.oss-cn-shanghai.aliyuncs.com/marketplace/Heimdall/%E7%95%8C%E9%9D%A2.png"></p><h3 id="V-Support-for-adding-multiple-users"><a href="#V-Support-for-adding-multiple-users" class="headerlink" title="V. Support for adding multiple users"></a>V. 
Support for adding multiple users</h3><p>Heimdall comes with multi-user support, which means multiple users can share the same dashboard and access and edit it according to their permission levels, making it well suited to team sharing and collaboration.</p><p><img src="https://grstatic.oss-cn-shanghai.aliyuncs.com/marketplace/Heimdall/%E5%A4%9A%E7%94%A8%E6%88%B7.png"></p><h2 id="Installation-Guide"><a href="#Installation-Guide" class="headerlink" title="Installation Guide"></a>Installation Guide</h2><ol><li>Go to the <a href="https://hub.grapps.cn/">Cloud Native App Store</a></li><li>Search for <a href="https://hub.grapps.cn/marketplace/apps/1324">Heimdall</a></li><li>Open the details page and select a package type (this app supports Docker install and RAM install)</li><li>Click Install and run the corresponding command. If you have any questions, please refer to the <a href="https://hub.grapps.cn/docs/">Documentation</a> or join the community!</li></ol><h2 id="About-Cloud-Native-Marketplace"><a href="#About-Cloud-Native-Marketplace" class="headerlink" title="About Cloud Native Marketplace"></a>About Cloud Native Marketplace</h2><p>The Cloud Native App Market is an app marketplace that aggregates all kinds of open-source software. You can use it as your own Helm Chart repository offering a rich and diverse range of Helm apps, and you can also choose from a wide range of options such as Docker apps, Rainbond app templates, and Xinchuang apps.</p><p>Official website: <a href="https://hub.grapps.cn/">https://hub.grapps.cn/</a></p><p>WeChat group: follow the <code>Cloud Native Application Marketplace</code> official account to join the technical exchange group.</p>]]></content>
    
    
    <summary type="html">The internet is full of endless information and websites, which can make people feel overwhelmed with information. The power of a navigation page is that it provides users with a focal point, allowing them to quickly access the websites they frequently use without having to enter the URL or perform tedious searches every time.</summary>
    
    
    
    <category term="Programming" scheme="https://www.nablepart.com/categories/Programming/"/>
    
    
    <category term="Heimdall" scheme="https://www.nablepart.com/tags/Heimdall/"/>
    
  </entry>
  
  <entry>
    <title>Cities:Skylines II is a work born under pressure and shadow</title>
    <link href="https://www.nablepart.com/14c4141677ca/"/>
    <id>https://www.nablepart.com/14c4141677ca/</id>
    <published>2023-10-21T12:00:00.000Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
<content type="html"><![CDATA[<p><img src="https://s2.loli.net/2023/10/30/dwkcNbMVq4GT6uv.png" alt="image.png"></p><p><strong>The best city simulation game of the moment, how else can the sequel be upgraded?</strong></p><p><strong>Cities:Skylines II is a work born under pressure and shadow.</strong></p><p>Its predecessor, Cities:Skylines, sold 3.5 million copies in its first year after its launch in 2015, directly stealing the dominance of the genre from SimCity 2013, and went on to sell more than 12 million copies over the following eight years, making it the only game in the genre that can be called a “mainstream hit”.</p><p>With a 90% positive rating on Steam and over 180,000 positive reviews, Cities: Skylines is another indie classic that has made a name for itself. Calling it the best city simulation game of the moment sounds like a bit of an exaggeration, but it’s actually not that surprising: in those eight years, there simply hasn’t been another game that could shake its position even a little.</p><p>So whether its sequel can step out of the shadow of its “hegemonic” predecessor and bring some fresh fun to city simulation has become a question that can’t be avoided.</p><p>City builders have no characters or storylines, so the only way to take the experience to the next level is to look for meaningful enhancements to the core gameplay and core fun.</p><p>Over the course of the past week, I took a deep dive into Cities: Skylines 2. I think the developers have done some pretty amazing things and, on many levels, taken a step further on the shoulders of giants and built an excellent starting point for the future. 
But at the same time, unfortunately, the game has run into some technical issues as a result.</p><p>As many of you who have been paying attention may know, Cities: Skylines 2 gives a recommended configuration of an RTX 3080, which is indeed a bit of an exaggeration for a simulation game. The RTX 2080Ti that we used for our actual testing was indeed only able to maintain a relatively good gaming experience at 2K resolution and medium image quality at first, and it was only after a subsequent 50GB update was opened up that the performance issues improved.</p><p><img src="https://s2.loli.net/2023/10/30/13pW6TXRIbaVzqi.png" alt="image.png"></p><p>Had the performance issues not dragged it down, the Cities:Skylines II experience would have been really full of fun, and all of it would have been a real improvement over the first generation.</p><p>For example, one of the bigger problems with the first generation was that achievements in the game unlocked so quickly that you could quickly get rich on loans (or slamming toll booths) and then basically build whatever you wanted.</p><p>The second generation added a highly flexible technology tree system that requires players to complete missions to gain experience and technology points, and then act like a real consul to unlock specific technologies according to the characteristics of the map, guiding the city’s next step in development.</p><p>If the player is developing agriculture and forestry, he can unlock watchtowers and firefighting helicopters first. If you want to build a high-tech city, you can also unlock the university, hospital and international airport. These tech trees don’t interfere with each other, so players can focus on creating the city features they have in mind without having to be distracted by going out and solving other problems.</p><p><img src="https://s2.loli.net/2023/10/30/ErTKP1Iuo5jSxDm.png" alt="image.png"></p><p>In Cities: Skylines 2, the game’s most central citizen AI has changed radically. 
Citizens’ actions are much more complex. Besides commuting distance, time and cost now weigh on the course of action. When congestion occurs, citizens will actively reroute themselves to a quicker path, and poor parking at the destination also affects their willingness to travel.</p><p>In the previous game, citizens would always tend to take the shortest route, making the transportation problem very difficult to solve in the mid and late game. Eventually, players simply broke the mold and developed the trick of building toll booths all over the core main roads.</p><p>In the second generation, the toll-booth trick no longer works, and traffic can be seen spreading across several parallel arterials. Even more interestingly, drivers on the road can now have car accidents; if an area’s roads lack maintenance, the probability of accidents goes up. Zooming in closer, citizens walk their dogs and play in parks, and little stories seem to play out on the margins. All of this makes the cars and pedestrians in the city feel more alive, and less like a perfectly functioning piece of programming running forever inside a sandbox.</p><p><img src="https://s2.loli.net/2023/10/30/ROD3Nspt4H1FaWi.png" alt="image.png"></p><p>The buildable area of the city has also been vastly improved. One feature of the series is gradually unlocking map blocks to expand the buildable area. In the original first-generation game, a city could have up to nine blocks, equal to about 35 square kilometers.</p><p>By Cities: Skylines 2, individual blocks had shrunk to one-eighth to one-ninth of their previous size, but the total number of purchasable blocks skyrocketed to more than 440, bringing the game’s native playable area to nearly 160 square kilometers, almost five times the size of its predecessor. 
At the same time, the increased number of plots also allows for more flexibility in city planning, making it less likely that you’ll pay a lot of money for a plot of land only to have a large portion of it sit abandoned.</p><p><img src="https://s2.loli.net/2023/10/30/diYZSzqXK3vGflF.png" alt="image.png"></p><p>Of course, the first generation had mods that allowed players to unlock all 25 blocks, bringing the playable area to 96 square kilometers. But in this state, if the player’s city is built a bit more complex and full, a highly configured computer will struggle to run it, and loading the map using a high-speed SSD may take several minutes, even ten. And in the middle and late game, when the land parcels are opened to 200-300, and the population of the city reaches 40-50,000, the frame rate drop and lag will start to become very obvious.</p><p>The first generation of the game was known for its excellent road network construction system, and the second generation improves on that. There are more detailed and effective hints for auxiliary lines, a fine selection of which lane corresponds to which when connecting two different widths of road, and even a conscientious feature such as one-click intersections into traffic circles, making it easy for players to build complex and beautifully crafted road networks without relying on any plug-ins.</p><p><img src="https://s2.loli.net/2023/10/30/X16YMd9Pc5EGz2F.png" alt="image.png"></p><p>One-click traffic circles really are great inventions</p><p>Generation 2’s power lines and water and sewer pipes are integrated into the roads, solving a big problem. 
No matter how far apart two planning areas are, as long as a road connects them they can deliver power and water to each other; no more building separate generators for far-flung pumping stations or stringing ugly wires across the city center in the early game.</p><p>The only problem with this system at the moment is that highways and bridges can cut off this transmission pipeline, making connections slightly tricky, but that’s a matter for a subsequent patch.</p><p><img src="https://s2.loli.net/2023/10/30/7hAgvHDFTEz9Ixb.png" alt="image.png"></p><p>The administrative divisions in the game finally have a practical use, with services like police stations, fire stations, and hospitals serving designated administrative areas in addition to their own neighborhoods. This means that at the beginning of the game, when the population is small, several distant neighborhoods can share the same hospital or fire station, greatly reducing the cost of services.</p><p><img src="https://s2.loli.net/2023/10/30/83ZbACmudOhkGcI.png" alt="image.png"></p><p>Most facilities also support modular upgrades and expansions, such as adding garages to small fire stations, water purification facilities to sewage treatment plants, and integrated cargo terminals to airports, reducing the need to build several buildings of the same function in the late game and making the size of the city seem much more reasonable.</p><p><img src="https://s2.loli.net/2023/10/30/bS8kQFvHYN4DfuA.png" alt="image.png"></p><p>The in-game radio station is also quite entertaining, with DJs talking to each other and even commenting on, and poking fun at, the city’s development. It gives me a small sense of accomplishment when I plan out roads and business districts and then hear the DJ say, “Our businesses are thriving, and it’s great that several stores have opened.”</p><p>
There have been many, many more subtle changes that I can’t even begin to list.</p><p>Compared to its predecessor, which had eight years of development and several major updates behind it, Cities: Skylines 2 inevitably looks a little thinner on content, with plenty of room clearly reserved for future DLC. But the amount of content on offer now counts as an honest effort for a new game.</p><p><img src="https://s2.loli.net/2023/10/30/qTwVSCp47XH3Wlj.png" alt="image.png"></p><p>This amount of content also partly explains why the game performs so poorly. Improved AI, more road events, more expansive maps, more complex modular buildings: each of these gradually adds up to a very hardware-hungry game. The development team hasn’t shied away from this, admitting that it came down to a choice between releasing an under-polished product and delaying the release.</p><p>For most players, Cities: Skylines 2 has made a number of solid improvements on the excellent foundation of the previous generation, and would probably have taken over its predecessor’s throne had it not been bogged down by performance issues. As for those performance issues, the development team will surely find a way to fix them, because it really won’t work if they don’t.</p><p>Before this article was written, we received a 50GB update package that, according to the developer’s notes, fixes a large number of operational and optimization issues in the current version. Limited by time, we didn’t do much in-depth testing, but the average frame rate under the same settings increased by about 30%, and there was far less inexplicable stuttering, which is a pretty positive sign.</p><p>This efficiency in shipping timely updates also gives me more confidence in the development team’s attitude. 
The current gap between the sequel’s plans for the Steam Workshop and mod ecosystem and the expectations of the player community has caused some concern, but I’m confident that a developer like Colossal Order, which has always valued community culture, will do right by this hard-won sequel. After all, these days, big-budget simulation games are becoming increasingly rare.</p>]]></content>
    
    
    <summary type="html">The best city simulation game of the moment, how else can the sequel be upgraded?</summary>
    
    
    
    <category term="Game News" scheme="https://www.nablepart.com/categories/Game-News/"/>
    
    <category term="Simulation Games" scheme="https://www.nablepart.com/categories/Simulation-Games/"/>
    
    
    <category term="Simulation Games" scheme="https://www.nablepart.com/tags/Simulation-Games/"/>
    
    <category term="Cities:Skylines II" scheme="https://www.nablepart.com/tags/Cities-Skylines-II/"/>
    
    <category term="Steam" scheme="https://www.nablepart.com/tags/Steam/"/>
    
  </entry>
  
  <entry>
    <title>Will AIGC hinder education? Unless…</title>
    <link href="https://www.nablepart.com/56d82b5adbaa/"/>
    <id>https://www.nablepart.com/56d82b5adbaa/</id>
    <published>2023-10-21T11:50:26.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
    <content type="html"><![CDATA[<h2 id="AIGC是否会阻碍教育？除非……"><a href="#AIGC是否会阻碍教育？除非……" class="headerlink" title="AIGC是否会阻碍教育？除非……"></a>Will AIGC hinder education? Unless…</h2><p>Recently I had the chance to discuss AIGC, the hot topic of the moment, with Joey, the founder of LEAPS.</p><h3 id="Joey-Lin"><a href="#Joey-Lin" class="headerlink" title="Joey Lin"></a><strong>Joey Lin</strong></h3><blockquote><p>#PhD in nuclear energy, University of Cambridge<br>#Founder of the LEAPS International Research Center<br>#Forbes 30 Under 30 Asia, 2018<br>#10 years of entrepreneurial experience in education</p></blockquote><p><img src="https://cdn-images-1.medium.com/max/2000/1*x0nK1cj64MVAmnDaDYtfUg.png"></p><p>The following is what Joey shared.</p><p><em><strong>Question 1</strong></em></p><h3 id="你认为AIGC对教育会有什么影响？"><a href="#你认为AIGC对教育会有什么影响？" class="headerlink" title="你认为AIGC对教育会有什么影响？"></a>What impact do you think AIGC will have on education?</h3><blockquote><p>The arrival of AIGC will break traditional education apart. In the old teacher-student relationship, learners had far less information than teachers, but now AI gives every learner access to unlimited information. It is like the transition from the agricultural age to the steam age, with steam engines now appearing one after another. The dynamic between the two sides has changed.</p></blockquote><p><em><strong>Question 2</strong></em></p><h3 id="是否可能找到解决学生滥用AIGC问题的方法？"><a href="#是否可能找到解决学生滥用AIGC问题的方法？" class="headerlink" title="是否可能找到解决学生滥用AIGC问题的方法？"></a>Is it possible to solve the problem of students misusing AIGC?</h3><blockquote><p>With AIGC around, it is extremely challenging for teachers to catch every cheater and expect students to comply with school policy.<br>An omniscient god has suddenly appeared, free of charge and willing to serve every student. For a field as traditional as education this is an enormous challenge, and at this point a wave of educational inequity is to be expected.<br>Because teachers are extremely busy and the education system relies entirely on human labor, every policy is implemented very slowly, especially when teachers need time to learn something new.<br>Imagine a teacher assigns a 3,000-word essay due in a month. A student can now produce that essay, complete with data, analysis, and a literature review, with one click in ten seconds. The student who writes carefully in class gets a B because of mistakes, while the lazy one gets an A*.</p></blockquote><p><img src="https://cdn-images-1.medium.com/max/2000/1*VDSYJNn8de1vYBjsSfJgCQ.png"></p><blockquote><p>What frightens me is that, having worked in this field for more than a decade, I believe this is inevitable. Perhaps, for the first time in human history, education and global development will be inversely related.<br>Students who come up through the whole examination system will be unable to adapt to a modern society that has fully adopted AIGC.<br>Perhaps that picture is too harsh. There are also milder remedies, but those adjustments take time, and during a two- to three-year transition a significant educational imbalance is unavoidable, namely that some people will cheat their way into better institutions.</p></blockquote><p><em><strong>Question 3</strong></em></p><h3 id="在LEAPS开发时，你做了哪些预测？"><a href="#在LEAPS开发时，你做了哪些预测？" class="headerlink" title="在LEAPS开发时，你做了哪些预测？"></a>What predictions did you make when building LEAPS?</h3><blockquote><p>When LEAPS was first founded in December 2022, we made three predictions.<br>The education assessment system will change significantly.</p></blockquote><ol><li>A shift from outcome-based assessment to process-based assessment</li><li>A shift from single-score tests to genuinely repeated testing</li><li>Questioning skills will matter more than ever.<blockquote><p>In March, LEAPS will launch an advanced tool for teachers and schools that can cut teaching costs by 20 to 50 times and help teachers dramatically reduce the time spent grading. Stay tuned for the development of our real-time process assessment system and game-like learning tools for students.</p></blockquote></li></ol><p>In the digital age, modern information technology has brought tremendous innovation and reform to education and teaching, with far-reaching effects on everything from teaching formats to the learning experience, making teachers’ teaching and students’ learning more efficient and precise. We will continue to follow the trends and direction of AI and study in depth how AI can further improve the quality of education, innovate teaching models, and enhance the learning experience.</p>]]></content>
    
    
    <summary type="html">The impact of AIGC on education and possible responses. Education faces transformation, and student misuse of AIGC brings challenges, but there are also predictions of assessment reform. Learn about the relationship between education and AI and where it is headed.</summary>
    
    
    
    
    <category term="Artificial Intelligence" scheme="https://www.nablepart.com/tags/%E4%BA%BA%E5%B7%A5%E6%99%BA%E8%83%BD/"/>
    
    <category term="AIGC" scheme="https://www.nablepart.com/tags/AIGC/"/>
    
    <category term="Education Reform" scheme="https://www.nablepart.com/tags/%E6%95%99%E8%82%B2%E5%8F%98%E9%9D%A9/"/>
    
    <category term="Student Misuse" scheme="https://www.nablepart.com/tags/%E5%AD%A6%E7%94%9F%E6%BB%A5%E7%94%A8/"/>
    
    <category term="Educational Assessment" scheme="https://www.nablepart.com/tags/%E6%95%99%E8%82%B2%E8%AF%84%E4%BC%B0/"/>
    
    <category term="Education Development" scheme="https://www.nablepart.com/tags/%E6%95%99%E8%82%B2%E5%8F%91%E5%B1%95/"/>
    
  </entry>
  
  <entry>
    <title>It&#39;s 3202, why isn&#39;t SSR as popular as expected?</title>
    <link href="https://www.nablepart.com/829202d31b89/"/>
    <id>https://www.nablepart.com/829202d31b89/</id>
    <published>2023-10-20T12:00:00.000Z</published>
    <updated>2025-08-25T09:00:39.786Z</updated>
    
    <content type="html"><![CDATA[<p>A study found that every additional second of page load time costs a site about 10% of its users. To make pages open near-instantly, developers of every kind have kept exploring optimization strategies; once browser-side optimization alone could no longer satisfy the most demanding requirements, attention turned to the server side, and Server-Side Rendering, an old concept, suddenly became popular again.</p><p>Server-Side Rendering, abbreviated SSR, is, as the name implies, rendering performed on the server. This approach not only benefits first-screen rendering and improves the first-screen response speed of SPA applications, but also makes pages easier for search engines to crawl, which helps SEO. Yet by 2023, SSR is not as popular as expected.</p><p>Some argue that the main reason for adopting SSR is SEO, but search engines have kept pace with framework-built SPAs and now support them reasonably well, so the need for SSR is no longer so great. Others go further and call SSR a pseudo-requirement, claiming that pages load just as fast once business logic and controllers are properly separated.</p><p>But others counter that a large number of users still cannot get a good experience when visiting web pages because of their network environment or device, and if we want to improve the experience for those users, SSR is indispensable.</p><p>What is the real situation? What has kept SSR from becoming the dominant Web development paradigm? Is the approach outdated in today’s environment? What kinds of business scenarios suit SSR best? Open Source China invited two front-end leads to share their views.</p><ul><li><p>
Liu Kui, community nickname kuitos, front-end engineer in Alipay’s Experience Technology Department, author of the open-source micro-frontend framework qiankun, currently responsible for web infrastructure R&amp;D at Ant.</p></li><li><p>Liu Yong, community nickname skypig, head of Node.js Infra at a major tech company, core developer of EggJS &#x2F; CNPM.</p></li></ul><h2 id="I-SSR-not-a-pseudo-requirement"><a href="#I-SSR-not-a-pseudo-requirement" class="headerlink" title="I. SSR, not a pseudo-requirement"></a>I. SSR, not a pseudo-requirement</h2><p><strong>Q1: In your experience, what types of projects and scenarios use SSR most often? Can you give some examples?</strong></p><p><strong>Liu Kui:</strong> SSR is most common on sites that are very sensitive to first-screen performance or have strong SEO requirements, for example:</p><ul><li><p>E-commerce platforms: faster first-screen rendering lets users see product information sooner, increasing purchase conversion.</p></li><li><p>Campaign pages: SSR can measurably improve the business results of marketing campaigns.</p></li><li><p>Portals: content-driven sites usually have stronger SEO requirements.</p></li></ul><p><strong>Q2: From your experience, what are the advantages of SSR over CSR (client-side rendering)?</strong></p><p><strong>Liu Kui:</strong> In my experience the biggest advantage is still the first-screen experience: with SSR, users can see meaningful page content while the HTML is still loading, which CSR fundamentally struggles to achieve.</p><p><strong>Q3: Search engines can now render JavaScript. Do you think SSR is still needed for SEO?</strong></p><p><strong>Liu Kui:</strong> For well-known reasons, domestic search engines still handle SPA-style applications poorly, so if you want your site to be properly indexed by crawlers, you basically still need an SSR (or SSR-variant) 
approach.</p><p><strong>Q4: Some people believe SSR is a pseudo-requirement for improving first-screen rendering performance. If the back-end separates business logic from controllers, and controllers are split into view controllers and interface controllers calling the same business logic, then on the first page load the front-end JavaScript renders the data delivered with the page, and subsequent user interactions fetch data from the interfaces. They claim this solution easily beats performance-anxious SSR. How would you evaluate that?</strong></p><p><strong>Liu Kui:</strong> That solution is still CSR in nature and cannot solve the problem native to CSR: the user must wait for the JS download to complete -&gt; an interface request to be issued -&gt; the JS to receive the data and render the page, before seeing any meaningful content. Under more demanding network conditions and on weaker devices, the problem becomes even more obvious.</p><p><strong>Liu Yong:</strong> Technology selection should follow the team’s infrastructure maturity and the business scenario. Neither of these two approaches is absolutely superior or inferior, nor are they mutually exclusive; front-end engineering can combine them into a single solution.</p><h2 id="Second-SSR-want-to-red-a-little-difficult"><a href="#Second-SSR-want-to-red-a-little-difficult" class="headerlink" title="Second, SSR, want to red a little difficult"></a>II. For SSR, popularity is hard to come by</h2><p><strong>Q5: As things stand, SSR has not become the mainstream Web development model. What do you think the obstacles are?</strong></p><p><strong>Liu Kui:</strong> I think there are mainly these kinds of reasons:</p><ul><li><p><strong>Technical complexity:</strong> SSR requires server-side rendering integrated with front-end frameworks, which demands more technical knowledge from developers.</p></li><li><p><strong>SSR brings additional development and maintenance costs:</strong> 
Compared with CSR, SSR requires the front-end team to take on extra server-side development and operations work: writing higher-performance server-side rendering logic; handling potential memory leaks, variable pollution, and other isolation issues; setting up SSR disaster recovery (falling back to CSR when SSR fails); and so on. All of this demands extra resources and time from the team.</p></li><li><p><strong>Scenario match:</strong> In China a large share of services are distributed through mini-programs and native apps, and products built on a pure Web stack are relatively rare, which is very different from scenarios abroad.</p></li></ul><p><strong>Liu Yong:</strong> First, SSR consumes server resources, and in an era of cost-cutting it needs to be combined with infrastructure such as Serverless or edge computing to strike a balance. And because it runs on the server, it places real demands on operations capability and on the front-end team’s technical depth.</p><p>Second, if the framework is not packaged and maintained well, it is very common for business developers to write SSR code that leaks memory. Moreover, when a front-end framework is not optimized for SSR scenarios, the first screen may display quickly while the user still has to download a huge bundle before the page becomes interactive; if time-to-interactive suffers that much, it is not worth it.</p><p>Finally, there is the migration-path problem. At Ant, for example, the upstream and downstream infrastructure around offline packages is already very mature, polished together with the app-side and network-side teams. 
That model has its defects, for instance too many businesses competing for offline-package resources, but in terms of first-screen performance SSR is not necessarily much better, so asking those teams to switch to SSR meets no small resistance.</p><p><strong>Q6: Some comment that SSR is too expensive to develop and maintain and are turning back to CSR. Can CSR achieve the same effect as SSR? Are there concrete approaches?</strong></p><p><strong>Liu Yong:</strong> On the key metric of first-screen performance, unoptimized CSR needs at least three serial HTTP requests, so its first-screen time is certainly worse than SSR’s (time-to-interactive is not necessarily worse).</p><p>That said, there are many mitigations, such as ServiceWorker, offline packages, and so on.</p><p><strong>Liu Kui:</strong> Looking purely at first-screen rendering speed, CSR can approach the effect of SSR through the following optimizations:</p><ol><li><p><strong>First-screen static resource optimization:</strong> use code splitting &amp; lazy loading to ensure the first screen only needs a minimized set of JS&#x2F;CSS, and inline critical resources directly into the HTML to reduce the network requests needed for first-screen rendering;</p></li><li><p><strong>Caching and preloading:</strong> use client-side caching, preloading, and similar mechanisms to speed up repeat visits;</p></li><li><p><strong>Use lighter-weight frameworks:</strong> choose a lighter front-end framework to shrink the first-screen JS and improve loading speed;</p></li><li><p><strong>Optimize the response speed of key interfaces:</strong> speed up the interfaces that supply critical first-screen content so the front-end can render the page sooner.</p></li></ol><p>However, if there are additional SEO requirements, it may be difficult to achieve the 
same effect with CSR alone.</p><p><strong>Q7: How much would it cost to convert an existing application directly into an integrated SSR application? What challenges would the development team face?</strong></p><p><strong>Liu Kui:</strong> The costs and challenges are as follows:</p><ol><li><p><strong>Application transformation cost:</strong> most applications cannot run directly in a server-side environment and need a certain amount of rework, such as removing dependencies on window, location, and other browser-only APIs from the first-screen rendering code, and building a JS runtime for the server side.</p></li><li><p><strong>SSR development and operations challenges:</strong> teams rich in both front-end and server-side experience are rare in most companies. As mentioned earlier, SSR brings extra server-side development and operations work, which the front-end team also has to plan for.</p></li></ol><h2 id="III-Maybe-SSR-CSR-will-be-the-new-direction-in-the-future"><a href="#III-Maybe-SSR-CSR-will-be-the-new-direction-in-the-future" class="headerlink" title="III. Maybe, SSR + CSR will be the new direction in the future?"></a>III. Maybe, SSR + CSR will be the new direction in the future?</h2><p><strong>Q8: Some sites now render only the first screen on the server: the first page a user opens is server-rendered, which guarantees rendering speed, while subsequent pages are client-rendered, preserving the separation of front end and back end. 
Do you think this is a more complete solution that combines the advantages of both?</strong></p><p><strong>Liu Kui:</strong> Yes, this is also the community’s current best practice; it preserves the advantages of both SSR and SPA applications well.</p><p><strong>Liu Yong:</strong> There were similar practices many years ago. Scrat Pagelet, which Yunlong built at UC, took this approach; back then even subsequent pages were handled by rendering fragments on the server and updating parts of the front-end page on demand.</p><p>The industry has seen more recent practice along these lines: developers write logic naturally, without worrying about what is or is not separated, and the front-end engineering layer splits things automatically into SSG + SSR + CSR. Some parts can be processed statically at build time, some are rendered on the server, and the rest is rendered directly on the client. All of this is achievable, provided the front-end engineering infrastructure is mature enough and the development model is convergent enough.</p><p>As a final note, most SSR practices I know of also put a short-TTL CDN in front, and then handle per-user customization and subsequent business logic through CSR.</p><p><strong>Q9: How do you see SSR developing in the future? Will it be phased out as hardware improves, or become more and more popular as the technology evolves?</strong></p><p><strong>Liu Yong:</strong> Optimization ideas do not become obsolete; one day the SSR programming interface we are familiar with may have changed. SSR once meant templates such as nunjucks and ejs, and now it means React and Vue. 
New technologies will keep appearing, but they are very likely to remain forms of SSR practice.</p><p><strong>Liu Kui:</strong> In my experience, new technology solutions usually try to squeeze more out of the hardware for a better interactive experience, so there will always be relatively “low-end” devices; that problem should never be considered solved (laughs).</p><p>In my view, the biggest adoption cost of SSR is still server-side development and operations, which is a heavy burden for the front-end team at most companies; the resulting ROI is low, which makes SSR hard to land. However, with the development of Serverless there are now many almost “zero-ops” Serverless offerings, which can greatly reduce a front-end team’s operations costs. Meanwhile, judging by community trends, the popular front-end frameworks of recent years, such as Next.js, remix-run, Qwik, Astro, and Fresh, are all embracing the Edge and SSR, and libraries such as React have introduced streaming SSR capabilities for better performance. 
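</p><p>The streaming SSR mentioned above can be sketched in a framework-agnostic way: flush the page shell immediately, then stream each section of HTML as its data resolves, rather than waiting for the whole page to render. A minimal illustration using only Node’s standard library; the section names and timings are invented, and real libraries such as React expose this idea through their own streaming server APIs:</p>

```javascript
// Simulate per-section data sources that resolve at different times.
function fetchSection(name, delayMs) {
  return new Promise((resolve) =>
    setTimeout(() => resolve(`<section>${name}</section>`), delayMs));
}

// Yield the page as a sequence of HTML chunks: shell first, then each
// section as soon as its data is ready, then the closing tags.
async function* renderPageChunks() {
  yield '<!doctype html><html><body><div id="app">';
  for (const [name, delay] of [['header', 10], ['content', 30], ['footer', 20]]) {
    yield await fetchSection(name, delay);
  }
  yield '</div></body></html>';
}

// An HTTP handler would pipe these chunks straight into the response,
// e.g.: for await (const chunk of renderPageChunks()) res.write(chunk);
```

<p>Because the shell is flushed before any data arrives, the browser can start parsing and painting immediately, which is exactly the performance win streaming SSR is after.</p><p>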
Through the integration and iteration of these framework technologies, not only can significantly reduce the R&amp;D cost of front-end engineers developing SSR applications, but also further improve the performance effect of traditional SSR.</p><p>From the current trend, I think SSR will become more and more popular with the reduction of R&amp;D and O&amp;M costs.</p><p><strong>Q10: Combined with your project experience, how would you evaluate SSR this model?</strong></p><p><strong>Liu Yong:</strong> Looking at the historical evolution of the front-end, it is SSR → CSR → SSR, which at a rough glance seems to be driving history backwards, but in reality it is not.</p><p>For example, when the front-end HTML + CSS + JS are all-in-one single-file way, because the front-end at that time there is no compilation ability can only be written together; with the evolution of front-end engineering, the development of the development period is split into a multi-file way of organizing the construction of the automated processing has become the mainstream; and then further appeared similar to the single-file way of Vue SFC, this is a retrogression? No, it’s not, but as the infrastructure improves, the user programming interface can be more intuitive, leaving things like performance and deployment to the tools.</p><p>So I think there are real scenarios for the SSR model, but at this stage, I think there are still a lot of practical performance issues and engineering problems that need to be solved in order to land better.</p><p><strong>Liu Kui:</strong> Although CSR can get a better first-screen experience, there is an obvious performance ceiling due to the functionality of the user’s device. 
SSR can better utilize edge computing (ESR), streaming rendering and other server-side capabilities to effectively improve the performance ceiling, and will be an effective weapon for Web applications to improve the performance of the first screen most of the time.</p><p>Of course, every project and team has different characteristics and goals, and you need to consider various factors when choosing a development model.</p>]]></content>
    
    
    <summary type="html">A study found that every additional second of page load time costs a website roughly 10% of its users. To speed up page loads, the industry has kept exploring optimization strategies; once browser-side optimizations alone could no longer satisfy the demand for the ultimate experience, attention turned to the server side, bringing the old concept of Server Side Rendering back into the spotlight.</summary>
    
    
    
    <category term="Programming" scheme="https://www.nablepart.com/categories/Programming/"/>
    
    
    <category term="SSR" scheme="https://www.nablepart.com/tags/SSR/"/>
    
    <category term="Xiaomi" scheme="https://www.nablepart.com/tags/Xiaomi/"/>
    
    <category term="Server Side Rendering" scheme="https://www.nablepart.com/tags/Server-Side-Rendering/"/>
    
    <category term="Front End" scheme="https://www.nablepart.com/tags/Front-End/"/>
    
  </entry>
  
  <entry>
    <title>Competition and Rivalry Among Internet Giants: Tencent, Baidu, and Alibaba</title>
    <link href="https://www.nablepart.com/194a32d5f16f/"/>
    <id>https://www.nablepart.com/194a32d5f16f/</id>
    <published>2023-10-19T02:28:00.000Z</published>
    <updated>2025-08-25T09:00:39.798Z</updated>
    
    <content type="html"><![CDATA[<h2 id="引言"><a href="#引言" class="headerlink" title="引言"></a>Introduction</h2><p>In today’s tech industry, Tencent, Baidu, and Alibaba are among China’s best-known Internet companies. Each, however, faces different challenges and risks, so people often debate which of the three is most likely to fall first. This article examines that question from multiple angles, analyzing the three companies’ core competitiveness, risk factors, corporate DNA, technology trends, and the changing times, in the hope of giving readers a comprehensive perspective.</p><h2 id="腾讯-社交巨头"><a href="#腾讯-社交巨头" class="headerlink" title="腾讯:社交巨头"></a>Tencent: The Social Giant</h2><p><img src="https://cdn.jsdelivr.net/gh/PirlosM/image@main/20231019131539.png"></p><p>Tencent is one of China’s largest social media companies, and its flagship product, WeChat, has become an indispensable part of everyday life. WeChat’s enormous user base and strong web of social relationships are among Tencent’s main competitive advantages. A social app’s moat lies not in the sophistication of the software itself but in the unique social relationships its users have built. If WeChat suddenly disappeared, users would lose contact with many friends and customers, which would affect them enormously. In the long run, it is hard to imagine another app persuading users to spend the time and effort to rebuild their social graphs from scratch. Tencent therefore enjoys a wide moat in social networking.</p><p>That said, not every Tencent social product has succeeded. The early QQ went through a decline, and many users lost their means of contacting friends and classmates. This shows that even a social product with a huge user base must keep innovating and adapting to market changes. <strong>Tencent needs to stay keenly attuned to the social industry, watch for shifting trends, and adjust its products in time to meet users’ changing social needs.</strong> Only then can it keep its lead in the social space.</p><h2 id="百度-搜索引擎的挑战"><a href="#百度-搜索引擎的挑战" class="headerlink" title="百度:搜索引擎的挑战"></a>Baidu: The Search Engine Under Challenge</h2><p><img src="https://cdn.jsdelivr.net/gh/PirlosM/image@main/20231019131622.png"></p><p>Baidu is one of China’s largest search engines and once dominated the Internet industry. With the arrival of the mobile Internet era, however, information became sealed inside individual apps and Baidu’s position as the search engine came under challenge. Mobile users prefer to search within apps rather than through Baidu, which has cost Baidu its infrastructure-level status in the mobile world and left it struggling.</p><p>Baidu has tried to broaden its business by investing in and building mobile Internet products, but these efforts did not achieve the expected success. Baidu Waimai, its food-delivery service, consumed large amounts of resources without delivering the expected returns. Baidu Tieba, meanwhile, once commanded the largest traffic on the web, but because Baidu valued traffic monetization over the community itself, Tieba was ultimately ruined by short-sighted strategies.</p><p><strong>Baidu needs to rework the business model of its search operation, relying less on traffic monetization and genuinely improving the search experience through algorithmic innovation.</strong> At the same time, it should dig into the essence of the mobile Internet, find a breakthrough, and strengthen its position there. Only by doing both can Baidu rebuild the value of its search engine and stay competitive.</p><h2 id="阿里巴巴-电商巨头"><a href="#阿里巴巴-电商巨头" class="headerlink" title="阿里巴巴:电商巨头"></a>Alibaba: The E-commerce Giant</h2><p><img src="https://cdn.jsdelivr.net/gh/PirlosM/image@main/20231019131711.png"></p><p>Alibaba is one of China’s largest e-commerce companies, with Taobao at its core. Taobao has a vast user base and high sales volume, but it also faces pressure from competitors such as JD.com and Pinduoduo. However large its scale, a single misstep could give rivals an opening to carve up its market share.</p><p>Alibaba’s moat lies not in the size of its user base but in how much users depend on it. If Taobao vanished, merchants and buyers could migrate smoothly to JD.com or Pinduoduo with essentially no disruption. Moreover, Taobao’s payment scenarios are an important pillar of Alipay’s position; without Taobao, Alipay’s standing could decline rapidly.</p><p><strong>Alibaba needs to strengthen its platform innovation, for example by using supply-chain and logistics innovation to increase platform stickiness, while expanding into finance, SaaS, and other scenarios to reduce its dependence on Taobao.</strong> Only by sharpening its core competitiveness can Alibaba keep its place as an e-commerce giant.</p><h2 id="比较和总结"><a href="#比较和总结" class="headerlink" title="比较和总结"></a>Comparison and Summary</h2><p>Tencent, Baidu, and Alibaba are giants of China’s Internet industry, each facing different challenges and risks. <strong>Tencent must keep pace with changes in social networking, Baidu must transform itself to improve the search experience, and Alibaba must use innovation to increase platform stickiness.</strong> Only by playing to their strengths and facing industry change head-on can the three companies stay ahead. By analyzing their current situations and problems, this article hopes to help readers better understand the development trends and challenges facing Internet companies.</p>]]></content>
    
    
    <summary type="html">An analysis and comparison of the three major Internet companies Tencent, Baidu, and Alibaba. Although each faces different challenges, all three hold strong advantages and moats in their respective fields.</summary>
    
    
    
    <category term="Industry Analysis" scheme="https://www.nablepart.com/categories/%E8%A1%8C%E4%B8%9A%E5%88%86%E6%9E%90/"/>
    
    
    <category term="Baidu" scheme="https://www.nablepart.com/tags/%E7%99%BE%E5%BA%A6/"/>
    
    <category term="Tencent" scheme="https://www.nablepart.com/tags/%E8%85%BE%E8%AE%AF/"/>
    
    <category term="Alibaba" scheme="https://www.nablepart.com/tags/%E9%98%BF%E9%87%8C%E5%B7%B4%E5%B7%B4/"/>
    
  </entry>
  
  <entry>
    <title>A Comprehensive Guide to Common JavaScript Array Methods</title>
    <link href="https://www.nablepart.com/c4c4729148a1/"/>
    <id>https://www.nablepart.com/c4c4729148a1/</id>
    <published>2023-10-18T11:28:00.000Z</published>
    <updated>2025-08-25T09:00:39.794Z</updated>
    
    <content type="html"><![CDATA[<h2 id="引言"><a href="#引言" class="headerlink" title="引言"></a>引言</h2><p>数组是JavaScript中最常用也最重要的数据结构之一。合理地利用数组的各种操作方法,可以大大简化代码,提高开发效率。</p><p>本文将全面介绍JavaScript中数组的各种常用方法,包括数组元素的增、删、改、查,以及数组的排序、搜索、迭代等操作。通过学习本文,你将系统地掌握数组的用法,在处理数组数据时能够灵活应用,在前端开发中更得心应手。</p><p>下面让我们正式开始JavaScript数组方法的学习之旅吧!</p><h2 id="一、增删改方法"><a href="#一、增删改方法" class="headerlink" title="一、增删改方法"></a>一、增删改方法</h2><p>数组操作是我们经常需要用到的功能,常见的增删改查方法可以帮助我们对数组进行操作。下面将介绍五种常见的增删方法。</p><h3 id="push"><a href="#push" class="headerlink" title="push()"></a>push()</h3><p>push()方法用于向数组末尾添加一个或多个元素,并返回数组的最新长度。</p><figure class="highlight js"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">let</span> colors = [<span class="string">&quot;red&quot;</span>];</span><br><span class="line"><span class="keyword">let</span> count = colors.<span class="title function_">push</span>(<span class="string">&quot;green&quot;</span>,<span class="string">&quot;blue&quot;</span>);</span><br><span class="line"><span class="variable language_">console</span>.<span class="title function_">log</span>(count); <span class="comment">// 3 </span></span><br><span class="line"><span class="variable language_">console</span>.<span class="title function_">log</span>(colors); <span class="comment">// [&quot;red&quot;,&quot;green&quot;,&quot;blue&quot;]</span></span><br></pre></td></tr></table></figure><h3 id="unshift"><a href="#unshift" class="headerlink" title="unshift()"></a>unshift()</h3><p>unshift()方法用于向数组开头添加一个或多个元素,并返回新的数组长度。</p><figure class="highlight js"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">let</span> colors = [<span 
class="string">&quot;red&quot;</span>];</span><br><span class="line"><span class="keyword">let</span> count = colors.<span class="title function_">unshift</span>(<span class="string">&quot;green&quot;</span>,<span class="string">&quot;blue&quot;</span>);</span><br><span class="line"><span class="variable language_">console</span>.<span class="title function_">log</span>(count); <span class="comment">// 3</span></span><br><span class="line"><span class="variable language_">console</span>.<span class="title function_">log</span>(colors); <span class="comment">// [&quot;green&quot;,&quot;blue&quot;,&quot;red&quot;]</span></span><br></pre></td></tr></table></figure><h3 id="pop"><a href="#pop" class="headerlink" title="pop()"></a>pop()</h3><p>pop()方法用于删除数组的最后一项,并返回被删除的项。</p><figure class="highlight js"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">let</span> colors = [<span class="string">&quot;red&quot;</span>,<span class="string">&quot;green&quot;</span>,<span class="string">&quot;blue&quot;</span>];</span><br><span class="line"><span class="keyword">let</span> item = colors.<span class="title function_">pop</span>();</span><br><span class="line"><span class="variable language_">console</span>.<span class="title function_">log</span>(item); <span class="comment">// blue</span></span><br><span class="line"><span class="variable language_">console</span>.<span class="title function_">log</span>(colors); <span class="comment">// [&quot;red&quot;,&quot;green&quot;]</span></span><br></pre></td></tr></table></figure><h3 id="shift"><a href="#shift" class="headerlink" title="shift()"></a>shift()</h3><p>shift()方法用于删除数组的第一项,并返回被删除的项。</p><figure class="highlight js"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span 
class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">let</span> colors = [<span class="string">&quot;red&quot;</span>,<span class="string">&quot;green&quot;</span>,<span class="string">&quot;blue&quot;</span>]; </span><br><span class="line"><span class="keyword">let</span> item = colors.<span class="title function_">shift</span>();</span><br><span class="line"><span class="variable language_">console</span>.<span class="title function_">log</span>(item); <span class="comment">// red</span></span><br><span class="line"><span class="variable language_">console</span>.<span class="title function_">log</span>(colors); <span class="comment">// [&quot;green&quot;,&quot;blue&quot;]</span></span><br></pre></td></tr></table></figure><h3 id="splice"><a href="#splice" class="headerlink" title="splice()"></a>splice()</h3><p>splice()方法可以在任意位置对数组进行增删改操作。它接受三个参数:开始位置、要删除元素的数量、要插入的任意多个元素,并返回删除元素的数组。这个方法会对原数组产生影响。</p><figure class="highlight js"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">let</span> colors = [<span class="string">&quot;red&quot;</span>, <span class="string">&quot;green&quot;</span>, <span class="string">&quot;blue&quot;</span>];</span><br><span class="line"><span class="keyword">let</span> removed = colors.<span class="title function_">splice</span>(<span class="number">1</span>, <span class="number">1</span>, <span class="string">&quot;red&quot;</span>, <span class="string">&quot;purple&quot;</span>); </span><br><span class="line"><span class="variable language_">console</span>.<span class="title function_">log</span>(colors); <span class="comment">// [&quot;red&quot;,&quot;red&quot;,&quot;purple&quot;,&quot;blue&quot;]</span></span><br><span class="line"><span class="variable 
language_">console</span>.<span class="title function_">log</span>(removed); <span class="comment">// [&quot;green&quot;]</span></span><br></pre></td></tr></table></figure><h2 id="二、搜索和位置方法"><a href="#二、搜索和位置方法" class="headerlink" title="二、搜索和位置方法"></a>二、搜索和位置方法</h2><p>在处理数组时,我们经常需要查找特定的元素,JavaScript提供了一些方法来帮助我们实现这个目标。 </p><h3 id="indexOf"><a href="#indexOf" class="headerlink" title="indexOf()"></a>indexOf()</h3><p>indexOf()方法返回数组中第一次出现指定元素的索引,如果没有找到该元素则返回-1。</p><figure class="highlight js"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">let</span> numbers = [<span class="number">1</span>, <span class="number">2</span>, <span class="number">3</span>, <span class="number">4</span>, <span class="number">5</span>, <span class="number">4</span>, <span class="number">3</span>, <span class="number">2</span>, <span class="number">1</span>];</span><br><span class="line"><span class="variable language_">console</span>.<span class="title function_">log</span>(numbers.<span class="title function_">indexOf</span>(<span class="number">9</span>)); <span class="comment">// -1</span></span><br><span class="line"><span class="variable language_">console</span>.<span class="title function_">log</span>(numbers.<span class="title function_">indexOf</span>(<span class="number">4</span>)); <span class="comment">// 3</span></span><br></pre></td></tr></table></figure><h3 id="lastIndexOf"><a href="#lastIndexOf" class="headerlink" title="lastIndexOf()"></a>lastIndexOf()</h3><p>lastIndexOf()方法返回数组中最后一次出现指定元素的索引,如果没有找到该元素则返回-1。</p><figure class="highlight js"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">let</span> numbers = [<span class="number">1</span>, <span class="number">2</span>, 
<span class="number">3</span>, <span class="number">4</span>, <span class="number">5</span>, <span class="number">4</span>, <span class="number">3</span>, <span class="number">2</span>, <span class="number">1</span>];  </span><br><span class="line"><span class="variable language_">console</span>.<span class="title function_">log</span>(numbers.<span class="title function_">lastIndexOf</span>(<span class="number">9</span>)); <span class="comment">// -1</span></span><br><span class="line"><span class="variable language_">console</span>.<span class="title function_">log</span>(numbers.<span class="title function_">lastIndexOf</span>(<span class="number">4</span>)); <span class="comment">// 5</span></span><br></pre></td></tr></table></figure><h3 id="find"><a href="#find" class="headerlink" title="find()"></a>find()</h3><p>find()方法返回数组中满足条件的第一个元素。</p><figure class="highlight js"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">const</span> people = [</span><br><span class="line">  &#123; <span class="attr">name</span>: <span class="string">&quot;张三&quot;</span>, <span class="attr">age</span>: <span class="number">27</span> &#125;,</span><br><span class="line">  &#123; <span class="attr">name</span>: <span class="string">&quot;李四&quot;</span>, <span class="attr">age</span>: <span class="number">29</span> &#125;  </span><br><span class="line">];</span><br><span class="line"><span class="keyword">let</span> result = people.<span class="title function_">find</span>(<span class="function">(<span class="params">item, index, array</span>) =&gt;</span> item.<span class="property">age</span> &lt; <span class="number">28</span>);</span><br><span class="line"><span class="variable language_">console</span>.<span class="title 
function_">log</span>(result); <span class="comment">// &#123; name: &quot;张三&quot;, age: 27 &#125;</span></span><br></pre></td></tr></table></figure><h3 id="findIndex"><a href="#findIndex" class="headerlink" title="findIndex()"></a>findIndex()</h3><p>findIndex()方法返回数组中满足条件的第一个元素的索引,如果没有找到则返回-1。</p><figure class="highlight js"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">const</span> people = [</span><br><span class="line">  &#123; <span class="attr">name</span>: <span class="string">&quot;张三&quot;</span>, <span class="attr">age</span>: <span class="number">27</span> &#125;,</span><br><span class="line">  &#123; <span class="attr">name</span>: <span class="string">&quot;李四&quot;</span>, <span class="attr">age</span>: <span class="number">29</span> &#125;</span><br><span class="line">];</span><br><span class="line"><span class="keyword">let</span> index = people.<span class="title function_">findIndex</span>(<span class="function">(<span class="params">item, index, array</span>) =&gt;</span> item.<span class="property">age</span> === <span class="number">28</span>); </span><br><span class="line"><span class="variable language_">console</span>.<span class="title function_">log</span>(index); <span class="comment">// -1</span></span><br></pre></td></tr></table></figure><h3 id="includes"><a href="#includes" class="headerlink" title="includes()"></a>includes()</h3><p>includes()方法返回一个布尔值,表示数组是否包含指定的元素。</p><figure class="highlight js"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">let</span> numbers = [<span class="number">1</span>, <span class="number">2</span>, <span 
class="number">3</span>, <span class="number">4</span>, <span class="number">5</span>, <span class="number">4</span>, <span class="number">3</span>, <span class="number">2</span>, <span class="number">1</span>];</span><br><span class="line"><span class="variable language_">console</span>.<span class="title function_">log</span>(numbers.<span class="title function_">includes</span>(<span class="number">4</span>)); <span class="comment">// true</span></span><br><span class="line"><span class="variable language_">console</span>.<span class="title function_">log</span>(numbers.<span class="title function_">includes</span>(<span class="number">9</span>)); <span class="comment">// false</span></span><br></pre></td></tr></table></figure><h2 id="三、排序方法"><a href="#三、排序方法" class="headerlink" title="三、排序方法"></a>三、排序方法</h2><p>JavaScript提供了两个方法用于对数组进行排序。</p><h3 id="reverse"><a href="#reverse" class="headerlink" title="reverse()"></a>reverse()</h3><p>reverse()方法用于反转数组中的元素顺序,改变原数组。</p><figure class="highlight js"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">let</span> values = [<span class="number">1</span>, <span class="number">2</span>, <span class="number">3</span>, <span class="number">4</span>, <span class="number">5</span>];</span><br><span class="line">values.<span class="title function_">reverse</span>();  </span><br><span class="line"><span class="variable language_">console</span>.<span class="title function_">log</span>(values); <span class="comment">// [5,4,3,2,1]</span></span><br></pre></td></tr></table></figure><h3 id="sort"><a href="#sort" class="headerlink" title="sort()"></a>sort()</h3><p>sort()方法用于对数组进行排序,可以接受一个回调函数来指定排序规则。</p><figure class="highlight js"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span 
class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">let</span> values = [<span class="number">4</span>, <span class="number">2</span>, <span class="number">3</span>, <span class="number">1</span>, <span class="number">5</span>];</span><br><span class="line">values.<span class="title function_">sort</span>(<span class="function">(<span class="params">a,b</span>) =&gt;</span> a - b); <span class="comment">// 升序</span></span><br><span class="line"><span class="variable language_">console</span>.<span class="title function_">log</span>(values); <span class="comment">//[1, 2, 3, 4, 5]</span></span><br><span class="line"></span><br><span class="line">values.<span class="title function_">sort</span>(<span class="function">(<span class="params">a,b</span>) =&gt;</span> b - a); <span class="comment">// 降序</span></span><br><span class="line"><span class="variable language_">console</span>.<span class="title function_">log</span>(values); <span class="comment">//[5, 4, 3, 2, 1]  </span></span><br></pre></td></tr></table></figure><h2 id="四、操作方法"><a href="#四、操作方法" class="headerlink" title="四、操作方法"></a>四、操作方法</h2><p>JavaScript提供了一些常用的操作方法,用于对数组进行操作。</p><h3 id="join"><a href="#join" class="headerlink" title="join()"></a>join()</h3><p>join()方法将数组转换为字符串,并使用指定的分隔符连接数组的每一项,但不会改变原数组。</p><figure class="highlight js"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">let</span> colors = [<span class="string">&quot;red&quot;</span>, <span class="string">&quot;green&quot;</span>, <span class="string">&quot;blue&quot;</span>];</span><br><span class="line"><span class="variable language_">console</span>.<span class="title function_">log</span>(colors.<span class="title function_">join</span>(<span class="string">&quot;,&quot;</span>)); 
<span class="comment">// &quot;red,green,blue&quot; </span></span><br><span class="line"><span class="variable language_">console</span>.<span class="title function_">log</span>(colors.<span class="title function_">join</span>(<span class="string">&quot;||&quot;</span>)); <span class="comment">// &quot;red||green||blue&quot;</span></span><br></pre></td></tr></table></figure><h3 id="slice"><a href="#slice" class="headerlink" title="slice()"></a>slice()</h3><p>slice()方法用于截取数组的一部分,返回一个新的数组,不会影响原数组。它接受两个参数,分别是开始截取的下标和结束的下标(不包含结束下标的元素)。如果只有一个参数,则从开始下标截取到末尾。</p><figure class="highlight js"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">let</span> colors = [<span class="string">&quot;red&quot;</span>, <span class="string">&quot;green&quot;</span>, <span class="string">&quot;blue&quot;</span>, <span class="string">&quot;yellow&quot;</span>, <span class="string">&quot;purple&quot;</span>]; </span><br><span class="line"><span class="keyword">let</span> colors2 = colors.<span class="title function_">slice</span>(<span class="number">1</span>); <span class="comment">// 从下标1开始截取到末尾</span></span><br><span class="line"><span class="keyword">let</span> colors3 = colors.<span class="title function_">slice</span>(<span class="number">1</span>, <span class="number">4</span>); <span class="comment">// 从下标1开始截取到下标4(不包含)</span></span><br><span class="line"><span class="variable language_">console</span>.<span class="title function_">log</span>(colors); <span class="comment">// [&quot;red&quot;, &quot;green&quot;, &quot;blue&quot;, &quot;yellow&quot;, &quot;purple&quot;]</span></span><br><span class="line"><span class="variable language_">console</span>.<span class="title function_">log</span>(colors2); <span class="comment">// 
[&quot;green&quot;, &quot;blue&quot;, &quot;yellow&quot;, &quot;purple&quot;]  </span></span><br><span class="line"><span class="variable language_">console</span>.<span class="title function_">log</span>(colors3); <span class="comment">// [&quot;green&quot;, &quot;blue&quot;, &quot;yellow&quot;]</span></span><br></pre></td></tr></table></figure><h3 id="concat"><a href="#concat" class="headerlink" title="concat()"></a>concat()</h3><p>concat()方法用于连接两个或多个数组,返回一个新的数组,不会改变原数组。</p><figure class="highlight js"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">let</span> colors = [<span class="string">&quot;red&quot;</span>, <span class="string">&quot;green&quot;</span>, <span class="string">&quot;blue&quot;</span>];</span><br><span class="line"><span class="keyword">let</span> colors2 = colors.<span class="title function_">concat</span>(<span class="string">&quot;yellow&quot;</span>, [<span class="string">&quot;black&quot;</span>, <span class="string">&quot;brown&quot;</span>]);</span><br><span class="line"><span class="variable language_">console</span>.<span class="title function_">log</span>(colors); <span class="comment">// [&quot;red&quot;, &quot;green&quot;, &quot;blue&quot;]  </span></span><br><span class="line"><span class="variable language_">console</span>.<span class="title function_">log</span>(colors2); <span class="comment">// [&quot;red&quot;, &quot;green&quot;, &quot;blue&quot;, &quot;yellow&quot;, &quot;black&quot;, &quot;brown&quot;]</span></span><br></pre></td></tr></table></figure><h2 id="五、迭代方法"><a href="#五、迭代方法" class="headerlink" title="五、迭代方法"></a>五、迭代方法</h2><p>JavaScript提供了一些用于迭代数组的方法,它们可以帮助我们对数组进行遍历和处理。</p><h3 id="some"><a href="#some" class="headerlink" title="some()"></a>some()</h3><p>some()方法对数组中的每一项执行回调函数,如果有一项满足函数的条件,则返回true。</p><figure class="highlight 
js"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">let</span> numbers = [<span class="number">1</span>, <span class="number">2</span>, <span class="number">3</span>, <span class="number">4</span>, <span class="number">5</span>, <span class="number">4</span>, <span class="number">3</span>, <span class="number">2</span>, <span class="number">1</span>];</span><br><span class="line"><span class="keyword">let</span> someResult = numbers.<span class="title function_">some</span>(<span class="function">(<span class="params">item, index, array</span>) =&gt;</span> item &gt; <span class="number">2</span>);</span><br><span class="line"><span class="variable language_">console</span>.<span class="title function_">log</span>(someResult); <span class="comment">// true</span></span><br></pre></td></tr></table></figure><h3 id="every"><a href="#every" class="headerlink" title="every()"></a>every()</h3><p>every()方法对数组中的每一项执行回调函数,如果每一项都满足函数的条件,则返回true。</p><figure class="highlight js"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">let</span> numbers = [<span class="number">1</span>, <span class="number">2</span>, <span class="number">3</span>, <span class="number">4</span>, <span class="number">5</span>, <span class="number">4</span>, <span class="number">3</span>, <span class="number">2</span>, <span class="number">1</span>];</span><br><span class="line"><span class="keyword">let</span> everyResult = numbers.<span class="title function_">every</span>(<span class="function">(<span class="params">item, index, array</span>) =&gt;</span> item &gt; <span class="number">2</span>);</span><br><span class="line"><span class="variable language_">console</span>.<span class="title 
function_">log</span>(everyResult); <span class="comment">// false</span></span><br></pre></td></tr></table></figure><h3 id="forEach"><a href="#forEach" class="headerlink" title="forEach()"></a>forEach()</h3><p>forEach()方法对数组中的每一项执行回调函数,没有返回值,类似于for循环。</p><figure class="highlight js"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">let</span> numbers = [<span class="number">1</span>, <span class="number">2</span>, <span class="number">3</span>, <span class="number">4</span>, <span class="number">5</span>, <span class="number">4</span>, <span class="number">3</span>, <span class="number">2</span>, <span class="number">1</span>];</span><br><span class="line">numbers.<span class="title function_">forEach</span>(<span class="function">(<span class="params">item, index, array</span>) =&gt;</span> &#123;</span><br><span class="line">  <span class="comment">// 执行某些操作</span></span><br><span class="line">&#125;);</span><br></pre></td></tr></table></figure><h3 id="filter"><a href="#filter" class="headerlink" title="filter()"></a>filter()</h3><p>filter()方法对数组中的每一项执行回调函数,返回满足条件的项组成的新数组。</p><figure class="highlight js"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">let</span> numbers = [<span class="number">1</span>, <span class="number">2</span>, <span class="number">3</span>, <span class="number">4</span>, <span class="number">5</span>, <span class="number">4</span>, <span class="number">3</span>, <span class="number">2</span>, <span class="number">1</span>];  </span><br><span class="line"><span class="keyword">let</span> newNumbers = numbers.<span class="title function_">filter</span>(<span class="function">(<span class="params">item, index, 
array</span>) =&gt;</span> item &gt; <span class="number">2</span>);</span><br><span class="line"><span class="variable language_">console</span>.<span class="title function_">log</span>(newNumbers); <span class="comment">// [3, 4, 5, 4, 3]</span></span><br></pre></td></tr></table></figure><h3 id="map"><a href="#map" class="headerlink" title="map()"></a>map()</h3><p>map()方法对数组中的每一项执行回调函数,返回由每次函数调用的结果构成的新数组。</p><figure class="highlight js"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">let</span> numbers = [<span class="number">1</span>, <span class="number">2</span>, <span class="number">3</span>, <span class="number">4</span>, <span class="number">5</span>, <span class="number">4</span>, <span class="number">3</span>, <span class="number">2</span>, <span class="number">1</span>];</span><br><span class="line"><span class="keyword">let</span> mapResult = numbers.<span class="title function_">map</span>(<span class="function">(<span class="params">item, index, array</span>) =&gt;</span> item * <span class="number">2</span>);</span><br><span class="line"><span class="variable language_">console</span>.<span class="title function_">log</span>(mapResult); <span class="comment">// [2, 4, 6, 8, 10, 8, 6, 4, 2]</span></span><br></pre></td></tr></table></figure><h3 id="reduce"><a href="#reduce" class="headerlink" title="reduce()"></a>reduce()</h3><p>reduce()方法用于对数组进行累加操作,接受两个参数,第一个参数为回调函数,第二个参数为初始值。如果没有初始值,则默认为数组的第一个元素。</p><figure class="highlight js"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">const</span> arr = [<span class="number">1</span>, <span class="number">2</span>, <span class="number">3</span>, 
<span class="number">4</span>, <span class="number">5</span>];  </span><br><span class="line"><span class="keyword">const</span> sum = arr.<span class="title function_">reduce</span>(<span class="function">(<span class="params">pre, item, index, arr</span>) =&gt;</span> &#123;</span><br><span class="line">  pre += item;</span><br><span class="line">  <span class="keyword">return</span> pre;</span><br><span class="line">&#125;, <span class="number">0</span>); <span class="comment">// sum 为 15</span></span><br></pre></td></tr></table></figure><h2 id="结语"><a href="#结语" class="headerlink" title="结语"></a>结语</h2><p>以上为 JavaScript 数组中常用的方法,掌握了这些方法,你可以更好地处理数组数据,在前端的学习和工作中发挥更大的作用。希望本文对你有所帮助!</p><p>请注意:本文为原创内容,转载请注明出处。</p>]]></content>
    
    
    <summary type="html">本文涵盖了JavaScript中常用的数组方法,包括增删改查、搜索和位置、排序、操作和迭代等方面。通过掌握这些方法,你可以更好地处理数组数据,提高前端学习和工作效率。</summary>
    
    
    
    <category term="教程指南" scheme="https://www.nablepart.com/categories/%E6%95%99%E7%A8%8B%E6%8C%87%E5%8D%97/"/>
    
    
    <category term="JavaScript" scheme="https://www.nablepart.com/tags/JavaScript/"/>
    
    <category term="数组" scheme="https://www.nablepart.com/tags/%E6%95%B0%E7%BB%84/"/>
    
  </entry>
  
  <entry>
    <title>How to Maximize Your Google Ad Placement for B2B Marketing</title>
    <link href="https://www.nablepart.com/6857150f9bf7/"/>
    <id>https://www.nablepart.com/6857150f9bf7/</id>
    <published>2023-10-18T02:28:00.000Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
    <content type="html"><![CDATA[<p><img src="https://cdn.jsdelivr.net/gh/PirlosM/image@main/20231018121905.png"></p><blockquote><p>In today’s digital age, Google advertising has become an essential tool for businesses to reach their target audience. With the right strategies in place, B2B marketers can leverage Google ads to generate quality leads and drive conversions. </p></blockquote><h2 id="Keyword-Research-The-Foundation-of-Your-Google-Ad-Campaign"><a href="#Keyword-Research-The-Foundation-of-Your-Google-Ad-Campaign" class="headerlink" title="Keyword Research: The Foundation of Your Google Ad Campaign"></a>Keyword Research: The Foundation of Your Google Ad Campaign</h2><p>One of the crucial aspects of a successful Google ad campaign is selecting the right keywords. For B2B marketing, it is essential to choose highly relevant and precise keywords that will attract potential customers. Avoid the common mistake of targeting consumer-oriented keywords for B2B campaigns. To determine the suitability of a keyword, conduct a Google search and analyze the top results. If the search results predominantly display consumer-oriented businesses, it’s a clear indication that the keyword is not suitable for B2B marketing.</p><h2 id="Determine-the-Right-Bidding-Strategy"><a href="#Determine-the-Right-Bidding-Strategy" class="headerlink" title="Determine the Right Bidding Strategy"></a>Determine the Right Bidding Strategy</h2><p>Once you have identified the relevant keywords, it’s crucial to research and determine an appropriate bidding strategy. Setting your ad bids too low may result in limited exposure and fewer impressions, making it challenging to reach potential customers. 
Tools like Keyword Planner or SEMrush can help you research and establish optimal bidding values based on the competitiveness of the keywords.</p><h2 id="Landing-Pages-and-Funnels-Creating-a-Clear-Path-to-Conversion"><a href="#Landing-Pages-and-Funnels-Creating-a-Clear-Path-to-Conversion" class="headerlink" title="Landing Pages and Funnels: Creating a Clear Path to Conversion"></a>Landing Pages and Funnels: Creating a Clear Path to Conversion</h2><p>In B2B marketing, it is recommended to create specific landing pages and funnels rather than directing ads to the homepage or other general pages of your website. Landing pages provide a clear and concise marketing message, making it easier for potential customers to understand your offerings and take the desired action. Additionally, implementing a funnel approach in your ad campaigns can be highly effective. For instance, instead of immediately asking for a quote, you can guide potential customers through a series of steps to gradually activate their interest and encourage them to inquire about your products or services.</p><h2 id="Creating-Your-Google-Ad-Campaign"><a href="#Creating-Your-Google-Ad-Campaign" class="headerlink" title="Creating Your Google Ad Campaign"></a>Creating Your Google Ad Campaign</h2><p>Once you have completed the necessary preparations, it’s time to create your Google ad campaign. The following steps outline the fundamental process:</p><h3 id="Define-Your-Advertising-Goals"><a href="#Define-Your-Advertising-Goals" class="headerlink" title="Define Your Advertising Goals"></a>Define Your Advertising Goals</h3><p>Before creating your ads, it’s important to define your advertising goals clearly. 
Determine the specific product, service, or website you want to promote and establish the objectives you aim to achieve, such as increasing website traffic or improving conversion rates.</p><h3 id="Select-Relevant-Keywords"><a href="#Select-Relevant-Keywords" class="headerlink" title="Select Relevant Keywords"></a>Select Relevant Keywords</h3><p>Choose keywords that are closely related to your ad theme. By selecting keywords that align with your offerings, your ads have a higher chance of appearing in search results when users search for those specific terms on Google.</p><h3 id="Create-Campaigns"><a href="#Create-Campaigns" class="headerlink" title="Create Campaigns"></a>Create Campaigns</h3><p>In Google Ads, campaigns are organizational units that group your ads. When creating a campaign, you need to set parameters such as budget, geographic targeting, and scheduling.</p><h3 id="Create-Ad-Groups"><a href="#Create-Ad-Groups" class="headerlink" title="Create Ad Groups"></a>Create Ad Groups</h3><p>Ad groups further refine your campaign structure by grouping together ads and related keywords. Organize your ad groups based on product categories, targeting options, or ad types.</p><h3 id="Craft-Compelling-Ad-Copy"><a href="#Craft-Compelling-Ad-Copy" class="headerlink" title="Craft Compelling Ad Copy"></a>Craft Compelling Ad Copy</h3><p>Write engaging and persuasive ad copy for each ad group. Ensure that your ad copy aligns with the selected keywords and encourages users to click on your ads.</p><h3 id="Set-Bids-and-Budgets"><a href="#Set-Bids-and-Budgets" class="headerlink" title="Set Bids and Budgets"></a>Set Bids and Budgets</h3><p>Set appropriate bids for each keyword based on your budget and advertising goals. 
Additionally, establish a total budget for each campaign to ensure that your ad spend remains within your limits.</p><h3 id="Implement-Conversion-Tracking"><a href="#Implement-Conversion-Tracking" class="headerlink" title="Implement Conversion Tracking"></a>Implement Conversion Tracking</h3><p>If your goal is to increase conversions, consider implementing conversion tracking. This allows you to track and measure the performance of your ads in terms of generating desired actions, such as purchases or registrations.</p><h3 id="Review-and-Launch-Your-Ads"><a href="#Review-and-Launch-Your-Ads" class="headerlink" title="Review and Launch Your Ads"></a>Review and Launch Your Ads</h3><p>Once you have created your ads, Google Ads will review them to ensure compliance with their advertising policies. After approval, you can launch your campaign, and your ads will start appearing in Google search results.</p><h2 id="Best-Practices-for-Google-Ad-Placement"><a href="#Best-Practices-for-Google-Ad-Placement" class="headerlink" title="Best Practices for Google Ad Placement"></a>Best Practices for Google Ad Placement</h2><p>While creating Google ad campaigns is relatively straightforward, it’s essential to pay attention to specific details to maximize your ad placement performance. 
Here are some best practices to consider:</p><ul><li><p>Avoid Broad Match Keywords Initially: To ensure more precise targeting and avoid wasting ad spend, it is recommended to avoid using broad match keywords at the beginning of your campaign.</p></li><li><p>Limit Keywords per Ad Group: Restrict the number of keywords to a maximum of 30 per ad group to maintain relevancy and focus.</p></li><li><p>Optimal Number of Ad Groups: Aim for no more than five ad groups within a campaign to maintain organization and manageability.</p></li><li><p>Multiple Ads per Ad Group: Include at least two ads per ad group to test different messaging and determine which performs better.</p></li></ul><h2 id="Conclusion"><a href="#Conclusion" class="headerlink" title="Conclusion"></a>Conclusion</h2><p>By following the steps outlined in this article, B2B marketers can create highly effective Google ad campaigns to reach their target audience and generate valuable leads. Remember to perform thorough keyword research, design compelling landing pages and funnels, and carefully structure your campaigns and ad groups. With continuous optimization and monitoring, you can maximize your Google ad placement and achieve your advertising goals in the B2B market.<br>Remember, success in Google ad placement requires strategic planning, continuous optimization, and a deep understanding of your target audience. Implement these best practices and enjoy the benefits of a successful B2B Google ad campaign.<br>Liked this article? Don’t forget to bookmark it for future reference!</p>]]></content>
    
    
    <summary type="html">In this article, we&#39;ll explore the key steps involved in creating an effective Google Ads campaign for B2B marketing, from keyword research to ad creation and optimization.</summary>
    
    
    
    <category term="e-commerce" scheme="https://www.nablepart.com/categories/e-commerce/"/>
    
    
    <category term="optimization" scheme="https://www.nablepart.com/tags/optimization/"/>
    
    <category term="keyword" scheme="https://www.nablepart.com/tags/keyword/"/>
    
    <category term="Google advertising" scheme="https://www.nablepart.com/tags/Google-advertising/"/>
    
    <category term="B2B" scheme="https://www.nablepart.com/tags/B2B/"/>
    
  </entry>
  
  <entry>
<title>Lei Jun unveils the complete system architecture of Xiaomi&#39;s Surge OS, stating that the underlying system has been completely rebuilt.</title>
    <link href="https://www.nablepart.com/349df88f1e57/"/>
    <id>https://www.nablepart.com/349df88f1e57/</id>
    <published>2023-10-17T12:00:00.000Z</published>
    <updated>2025-08-25T09:00:39.806Z</updated>
    
<content type="html"><![CDATA[<p>Lei Jun has just published another long article previewing Xiaomi Surge OS, officially announcing its complete system architecture.</p><p>According to reports, Xiaomi defined four goals from the very beginning of the architectural design:</p><ul><li>First, achieve the strongest single-device performance;</li><li>Second, use AI to empower the system as the “intelligent brain” of the entire ecosystem, capable of providing users with proactive services;</li><li>Third, enable more convenient and efficient connectivity;</li><li>Fourth, provide solid privacy and security protection across all devices.</li></ul><p><img src="https://cdn.jsdelivr.net/gh/youngjuning/images@main/202310301429286.png"></p><p>At the bottom, in the system kernel layer, Xiaomi has merged its self-developed Vela kernel with a deeply customized Linux kernel, rebuilding basic modules such as performance scheduling, task management, memory management, and file management to achieve a significant increase in performance and efficiency.</p><p>This new fused kernel <strong>supports more than 200 processor platforms and more than 20 file systems</strong>, and can be flexibly configured according to differences in hardware capabilities, providing excellent compatibility and fully unlocking the performance of each individual device.</p><p>In the service and framework layer above the kernel, Xiaomi has incorporated Android’s service framework and the service framework of its own Vela system as “middleware”; at the same time, it has created 8 new major subsystems. Among them, the new AI subsystem integrates the capabilities of large models and becomes the “intelligent brain” of the whole system<strong>, which not only enables individual devices to deliver strong on-device AI capabilities, but also empowers the entire ecosystem with intelligence</strong>.</p><p>At the top, the HyperConnect cross-device layer lets all devices share a unified connection protocol and communicate in real time, ultimately building a smart “people, cars, and homes” ecosystem.</p><p>It is worth mentioning that Xiaomi Surge OS builds a security system spanning every layer: the kernel layer, the service and framework layer, and the cross-device layer. In the kernel layer in particular, <strong>Xiaomi has enabled a fully independent, self-developed microkernel security system that guarantees security from the ground up</strong>.</p><p>Lei Jun’s original post is attached below:</p><p><img src="https://cdn.jsdelivr.net/gh/youngjuning/images@main/202310301428834.png"></p>]]></content>
    
    
    <summary type="html">At the lowest level of the system kernel, Xiaomi integrates its self-developed Vela system kernel with a deeply modified Linux system kernel, reconstructing various basic modules such as performance scheduling, task management, memory management, and file management, achieving significant improvements in performance and efficiency.</summary>
    
    
    
    <category term="Technology" scheme="https://www.nablepart.com/categories/Technology/"/>
    
    
    <category term="Xiaomi" scheme="https://www.nablepart.com/tags/Xiaomi/"/>
    
    <category term="Lei Jun" scheme="https://www.nablepart.com/tags/Lei-Jun/"/>
    
  </entry>
  
  <entry>
    <title>基于 Kotlin 协程的 Android 并发编程指南</title>
    <link href="https://www.nablepart.com/886ce33a3bd7/"/>
    <id>https://www.nablepart.com/886ce33a3bd7/</id>
    <published>2023-10-17T02:28:00.000Z</published>
    <updated>2025-08-25T09:00:39.806Z</updated>
    
    <content type="html"><![CDATA[<h2 id="引言"><a href="#引言" class="headerlink" title="引言"></a>引言</h2><p>在现代Android应用开发中,协程(Coroutine)已经成为一种不可或缺的技术。它不仅简化了异步编程,还提供了许多强大的工具和功能,可以在高阶场景中发挥出色的表现。本文将深入探讨Android并发编程的七个必要知识点,帮助开发者更好地利用协程来构建高效的Android应用。</p><h2 id="1-协程基础"><a href="#1-协程基础" class="headerlink" title="1. 协程基础"></a>1. 协程基础</h2><p>协程是一种能够在代码中实现顺序性操作的同时处理异步任务的并发机制。它不仅能够简化异步编程,还可以提高代码的可读性和维护性。协程通过挂起函数(suspend函数)实现异步操作,而不会阻塞线程。</p><p>在Kotlin中,使用launch函数创建和启动协程,它返回一个Job实例,代表了协程的生命周期。协程代码块位于launch函数的大括号内。</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">import</span> kotlinx.coroutines.*</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">fun</span> <span class="title">main</span><span class="params">()</span></span> &#123;</span><br><span class="line">  <span class="comment">// 创建协程</span></span><br><span class="line">  <span class="keyword">val</span> job = GlobalScope.launch &#123;</span><br><span class="line">    <span class="comment">// 协程代码块</span></span><br><span class="line">    delay(<span class="number">1000</span>)</span><br><span class="line">    println(<span class="string">&quot;Hello from Coroutine!&quot;</span>)</span><br><span class="line">  &#125;</span><br><span class="line">  </span><br><span class="line">  <span class="comment">// 等待协程完成</span></span><br><span class="line">  runBlocking &#123;</span><br><span class="line">    
job.join()</span><br><span class="line">  &#125;  </span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>取消协程是一种优雅地结束协程的方式,避免资源泄漏。协程可以通过调用cancel函数来取消。另外,当协程的父协程被取消时,所有的子协程也会被取消。</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">import</span> kotlinx.coroutines.*</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">fun</span> <span class="title">main</span><span class="params">()</span></span> = runBlocking &#123;</span><br><span class="line">  <span class="keyword">val</span> job = launch &#123;</span><br><span class="line">    <span class="keyword">try</span> &#123;</span><br><span class="line">      delay(<span class="number">1000</span>)</span><br><span class="line">      println(<span class="string">&quot;Coroutine completed.&quot;</span>)  </span><br><span class="line">    &#125; <span class="keyword">catch</span> (e: CancellationException) &#123;</span><br><span class="line">      println(<span class="string">&quot;Coroutine was cancelled.&quot;</span>)</span><br><span class="line">    &#125;</span><br><span class="line">  &#125;</span><br><span class="line"></span><br><span class="line">  delay(<span class="number">500</span>)</span><br><span class="line">  job.cancel() <span class="comment">// 取消协程</span></span><br><span class="line">  job.join()  </span><br><span 
class="line">&#125;</span><br></pre></td></tr></table></figure><p>协程内部的异常可以通过try和catch来捕获和处理。如果协程内部抛出异常,它会被传递到协程的调用者处。</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">import</span> kotlinx.coroutines.*</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">fun</span> <span class="title">main</span><span class="params">()</span></span> = runBlocking &#123;</span><br><span class="line">  <span class="keyword">val</span> job = launch &#123;</span><br><span class="line">    <span class="keyword">try</span> &#123;</span><br><span class="line">      <span class="keyword">throw</span> Exception(<span class="string">&quot;Something went wrong&quot;</span>)</span><br><span class="line">    &#125; <span class="keyword">catch</span> (e: Exception) &#123;</span><br><span class="line">      println(<span class="string">&quot;Exception caught: <span class="subst">$&#123;e.message&#125;</span>&quot;</span>)</span><br><span class="line">    &#125;</span><br><span class="line">  &#125;</span><br><span class="line"></span><br><span class="line">  job.join()</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><h2 id="2-上下文与调度器"><a href="#2-上下文与调度器" class="headerlink" title="2. 上下文与调度器"></a>2. 
上下文与调度器</h2><p>协程上下文和调度器是Kotlin Coroutine中的核心概念,它们决定了协程的执行环境和线程。合理使用不同的调度器,可以使协程在不同的线程上高效地执行,从而实现并发处理和性能优化。</p><p>协程上下文是协程运行时的环境,包含了许多不同的元素,如调度器、异常处理器等。调度器(Dispatcher)是上下文的一部分,它决定了协程在哪个线程上执行。Kotlin提供了几种内置的调度器,例如Dispatchers.Main、Dispatchers.IO、Dispatchers.Default等。</p><p>使用不同的调度器,我们可以在不同的线程上执行协程代码,从而优化并发处理和性能。</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">launch(Dispatchers.IO) &#123;</span><br><span class="line">  <span class="comment">// 在IO线程上执行协程代码,适用于网络请求和文件操作</span></span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line">launch(Dispatchers.Default) &#123;</span><br><span class="line">  <span class="comment">// 在默认的线程池上执行协程代码,适用于CPU密集型操作  </span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>使用withContext函数可以在协程内部切换线程,从而避免阻塞主线程,同时保持协程的执行上下文。</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">launch &#123;</span><br><span class="line">  <span class="keyword">val</span> result = withContext(Dispatchers.IO) &#123;</span><br><span class="line">    <span class="comment">// 在IO线程上执行异步操作</span></span><br><span class="line">  &#125;</span><br><span class="line">  <span class="comment">// 在UI线程处理结果</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>除了内置的调度器,你还可以创建自定义的调度器来满足特定需求,例如使用特定的线程池或调度算法。</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span 
class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">val</span> customDispatcher = Executors.newFixedThreadPool(<span class="number">4</span>).asCoroutineDispatcher()</span><br><span class="line"></span><br><span class="line">launch(customDispatcher) &#123;</span><br><span class="line">  <span class="comment">// 在自定义调度器上执行协程代码 </span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>协程上下文和调度器的合理使用可以使协程在不同的线程上高效地执行,并发处理和性能优化为异步编程带来更多便利。</p><h2 id="3-挂起函数"><a href="#3-挂起函数" class="headerlink" title="3. 挂起函数"></a>3. 挂起函数</h2><p>挂起函数是Kotlin Coroutine中的重要组成部分,它允许在协程中优雅地处理异步操作。通过掌握挂起函数的调用、编写和异常处理,你可以更好地在协程中处理异步操作,确保代码的可靠性和稳定性。</p><p>挂起函数是具有suspend关键字修饰的函数,它可以在协程内部被挂起,等待某个操作完成后再继续执行。典型的例子包括网络请求、文件读写、数据库查询等异步操作。</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">suspend</span> <span class="function"><span class="keyword">fun</span> <span class="title">fetchUserData</span><span class="params">()</span></span>: UserData &#123;</span><br><span class="line">  <span class="comment">// 执行异步操作,等待数据返回</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>在协程内部调用挂起函数是直接的,你可以像调用普通函数一样调用挂起函数,而无需关心线程的切换。</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">launch &#123;</span><br><span class="line">  <span class="keyword">val</span> userData = fetchUserData()</span><br><span class="line">  <span class="comment">// 处理获取到的用户数据</span></span><br><span class="line">&#125; 
</span><br></pre></td></tr></table></figure><p>在协程中,异常处理是非常重要的一部分。使用try和catch来捕获挂起函数中抛出的异常,确保代码的健壮性。</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line">launch &#123;</span><br><span class="line">  <span class="keyword">try</span> &#123;</span><br><span class="line">    <span class="keyword">val</span> userData = fetchUserData()</span><br><span class="line">    <span class="comment">// 处理获取到的用户数据</span></span><br><span class="line">  &#125; <span class="keyword">catch</span> (e: Exception) &#123;</span><br><span class="line">    <span class="comment">// 处理异常情况</span></span><br><span class="line">  &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>当协程被取消时,挂起函数也会被取消。协程的取消机制可以确保及时释放资源,避免资源泄漏。</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br></pre></td><td class="code"><pre><span class="line">launch &#123;</span><br><span class="line">  <span class="keyword">try</span> &#123;</span><br><span class="line">    <span class="keyword">val</span> userData = fetchUserData()</span><br><span class="line">    <span class="comment">// 处理获取到的用户数据</span></span><br><span class="line">  &#125; <span class="keyword">catch</span> (e: CancellationException) &#123;</span><br><span class="line">    <span class="comment">// 协程被取消时的处理</span></span><br><span class="line">  &#125; <span class="keyword">catch</span> 
(e: Exception) &#123;</span><br><span class="line">    <span class="comment">// 其他异常情况</span></span><br><span class="line">  &#125;  </span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>协程范围(coroutineScope函数)可以在挂起函数内部创建新的协程,它会等待所有的子协程完成后再继续执行。</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">suspend</span> <span class="function"><span class="keyword">fun</span> <span class="title">performMultipleTasks</span><span class="params">()</span></span> = coroutineScope &#123;</span><br><span class="line">  <span class="keyword">val</span> result1 = async &#123; fetchFromNetwork() &#125;  </span><br><span class="line">  <span class="keyword">val</span> result2 = async &#123; fetchFromDatabase() &#125;</span><br><span class="line">  <span class="keyword">val</span> combinedResult = result1.await() + result2.await()</span><br><span class="line">  <span class="comment">// 处理并发任务的结果</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>挂起函数是Kotlin Coroutine中的重要组成部分,通过掌握挂起函数的概念、调用、编写和异常处理,你可以更好地在协程中处理异步操作,确保代码的可靠性和稳定性。</p><h2 id="4-协程作用域"><a href="#4-协程作用域" class="headerlink" title="4. 协程作用域"></a>4. 
协程作用域</h2><p>协程作用域为我们提供了一种优雅且可控的方式来管理协程的生命周期和范围。通过合理地创建作用域并结合结构化并发,我们可以避免资源泄漏、提高代码的可读性,并确保协程在正确的上下文中执行,为异步编程带来更多便利。</p><p>协程作用域是一个上下文(CoroutineScope)的实例,用于创建和管理相关联的协程。通过将协程限定在特定的作用域内,我们可以更好地控制它们的生命周期。协程作用域通常与Activity、Fragment或ViewModel等相关联,以确保在组件销毁时取消所有协程,避免资源泄漏。</p><p>在Kotlin中,我们可以使用CoroutineScope来创建协程作用域。例如,在Activity中:</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">class</span> <span class="title class_">MyActivity</span> : <span class="type">AppCompatActivity</span>(), CoroutineScope <span class="keyword">by</span> CoroutineScope(Dispatchers.Main) &#123;</span><br><span class="line"></span><br><span class="line">  <span class="comment">// ...</span></span><br><span class="line"></span><br><span class="line">  <span class="keyword">override</span> <span class="function"><span class="keyword">fun</span> <span class="title">onDestroy</span><span class="params">()</span></span> &#123;</span><br><span class="line">    <span class="keyword">super</span>.onDestroy()</span><br><span class="line">    cancel() <span class="comment">// 取消协程作用域内的所有协程</span></span><br><span class="line">  &#125;  </span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>在协程作用域内启动协程时,它们会继承作用域的上下文和调度器。这意味着它们将在相同的线程上运行,并受到相同的取消影响。</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">launch &#123;</span><br><span class="line">  <span class="comment">// 在协程作用域内启动协程</span></span><br><span class="line">  <span 
class="comment">// This coroutine inherits the context and dispatcher of the outer scope</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Coroutine scopes can be nested; coroutines in an inner scope inherit the context of the outer scope. This lets us manage coroutine lifecycles at a finer granularity.</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">class</span> <span class="title class_">MyActivity</span> : <span class="type">AppCompatActivity</span>(), CoroutineScope <span class="keyword">by</span> CoroutineScope(Dispatchers.Main) &#123;</span><br><span class="line"></span><br><span class="line">  <span class="comment">// ...</span></span><br><span class="line"></span><br><span class="line">  <span class="function"><span class="keyword">fun</span> <span class="title">performMultipleTasks</span><span class="params">()</span></span> = launch &#123;    </span><br><span class="line">    <span class="comment">// Launch a coroutine inside the outer scope</span></span><br><span class="line">    launch &#123;</span><br><span class="line">      <span class="comment">// Launch a coroutine inside the inner scope </span></span><br><span class="line">    &#125;</span><br><span class="line">  &#125;  </span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Structured concurrency is an important property of coroutine scopes: it guarantees that execution continues only after all coroutines in the scope have completed. This helps avoid race conditions and resource leaks.</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br></pre></td><td class="code"><pre><span class="line">runBlocking &#123;</span><br><span class="line">  <span class="comment">// Launch coroutines inside the structured-concurrency scope</span></span><br><span class="line">  launch &#123;</span><br><span class="line">    <span class="comment">// Coroutine 1</span></span><br><span class="line">  &#125;</span><br><span class="line">  launch &#123;</span><br><span class="line">    <span class="comment">// Coroutine 2</span></span><br><span class="line">  &#125;</span><br><span class="line">  <span class="comment">// Waits here until all coroutines complete</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Coroutine scopes give us an elegant and controlled way to manage the lifecycle and extent of coroutines. By creating scopes deliberately and combining them with structured concurrency, we can avoid resource leaks, improve code readability, and make sure coroutines run in the right context, bringing more convenience to asynchronous programming.</p><h2 id="5-并发与顺序性"><a href="#5-并发与顺序性" class="headerlink" title="5. Concurrency and sequencing"></a>5. Concurrency and sequencing</h2><p>Asynchronous programming must handle the concurrent execution of multiple tasks as well as operations that have to run in a specific order. Kotlin Coroutines provide flexible mechanisms for both, while simplifying the composition of multiple coroutines.</p><p>Coroutines make managing concurrent tasks very intuitive. With the launch function we can run several tasks at the same time in different coroutines, and those coroutines can run in the same scope, inheriting the same context and dispatcher.</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">launch &#123;</span><br><span class="line">  <span class="keyword">val</span> result1 = async &#123; fetchFromNetwork() &#125;</span><br><span class="line">  <span class="keyword">val</span> result2 = async &#123; fetchFromDatabase() &#125;</span><br><span class="line">  <span class="keyword">val</span> combinedResult = result1.await() + result2.await()</span><br><span class="line">  <span class="comment">// Process the results of the concurrent tasks</span></span><br><span class="line">&#125; </span><br></pre></td></tr></table></figure><p>Sometimes we need operations to run in a specific order, for example reading data from the database before making a network request. Coroutines support such sequencing with the async function, using await to wait for the previous operation to complete.</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">launch &#123;</span><br><span class="line">  <span class="keyword">val</span> dataFromDatabase = async &#123; fetchFromDatabase() &#125;.await()</span><br><span class="line">  <span class="keyword">val</span> updatedData = async &#123; performNetworkRequest(dataFromDatabase) &#125;.await()</span><br><span class="line">  <span class="comment">// Process the result of the sequential operations</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>In complex scenarios we may need to combine the execution flows of several coroutines to meet specific requirements. The combination of async and await, together with structured concurrency, helps us implement this kind of complex coroutine scheduling.</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line">runBlocking &#123;</span><br><span class="line">  <span class="keyword">val</span> result = withContext(Dispatchers.IO) &#123;</span><br><span class="line">    <span class="keyword">val</span> dataFromDatabase = async &#123; fetchFromDatabase() &#125;.await()</span><br><span class="line">    <span class="keyword">val</span> updatedData = async &#123; performNetworkRequest(dataFromDatabase) &#125;.await()</span><br><span class="line">    <span class="comment">// More operations...</span></span><br><span class="line">  &#125;</span><br><span class="line">  <span class="comment">// Process the final result</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Concurrency and sequencing are common requirements in asynchronous programming, and Kotlin Coroutines offer flexible, concise mechanisms for both. By using launch, async, await, and structured concurrency appropriately, we can easily handle concurrent execution as well as ordered operations.</p><h2 id="6-协程间通信"><a href="#6-协程间通信" class="headerlink" title="6. Inter-coroutine communication"></a>6. Inter-coroutine communication</h2><p>Communication between coroutines matters a great deal in concurrent programming. Kotlin Coroutines provide several ways to achieve it, such as exchanging data through channels (Channel) and cooperation between coroutines.</p><p>A channel is a concurrency primitive that passes data between coroutines. It resembles a queue and supports send and receive operations. Through channels we can share and synchronize data between coroutines.</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">val</span> channel = Channel&lt;<span class="built_in">Int</span>&gt;()</span><br><span class="line"></span><br><span class="line">launch &#123;</span><br><span class="line">  repeat(<span class="number">5</span>) &#123;</span><br><span class="line">    delay(<span class="number">1000</span>)  </span><br><span class="line">    channel.send(it)</span><br><span class="line">  &#125;</span><br><span class="line">  channel.close()</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line">launch &#123;</span><br><span class="line">  <span class="keyword">for</span> (value <span class="keyword">in</span> channel) &#123;</span><br><span class="line">    println(value) </span><br><span class="line">  &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Cooperation between coroutines is a higher-level form of communication, implemented through suspension and resumption. For example, the yield function gives up the current coroutine’s turn so that other coroutines get a chance to run.</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br></pre></td><td class="code"><pre><span class="line">launch &#123;</span><br><span class="line">  repeat(<span class="number">5</span>) &#123;</span><br><span class="line">    println(<span class="string">&quot;Coroutine 1&quot;</span>)</span><br><span class="line">    yield()</span><br><span class="line">  &#125;</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line">launch &#123;</span><br><span class="line">  repeat(<span class="number">5</span>) &#123;</span><br><span class="line">    println(<span class="string">&quot;Coroutine 2&quot;</span>)</span><br><span class="line">    yield()</span><br><span class="line">  &#125; </span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Inter-coroutine communication is an important part of concurrent programming, and Kotlin Coroutines provide multiple mechanisms for sharing data and cooperating between coroutines. By using channels and suspension/resumption sensibly, we can achieve flexible and efficient communication between coroutines.</p><h2 id="7-协程在UI线程中的使用"><a href="#7-协程在UI线程中的使用" class="headerlink" title="7. Using coroutines on the UI thread"></a>7. Using coroutines on the UI thread</h2><p>In Android development, coroutines can run on the UI thread to perform non-blocking asynchronous operations. This avoids blocking the main thread and keeps the user interface responsive.</p><p>On Android, the Dispatchers.Main dispatcher switches a coroutine’s execution to the main thread.</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">launch(Dispatchers.Main) &#123;</span><br><span class="line">  <span class="comment">// Run coroutine code on the UI thread</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>When running UI operations inside a coroutine, avoid long blocking work that would hurt the smoothness of the interface. Use withContext to move expensive operations to a background thread, then handle the result on the UI thread.</p><figure class="highlight kotlin"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">launch(Dispatchers.Main) &#123;</span><br><span class="line">  <span class="keyword">val</span> result = withContext(Dispatchers.IO) &#123;</span><br><span class="line">    <span class="comment">// Perform the expensive work on a background thread</span></span><br><span class="line">  &#125;</span><br><span class="line">  <span class="comment">// Handle the result on the UI thread</span></span><br><span class="line">&#125; </span><br></pre></td></tr></table></figure><p>Using coroutines on the UI thread improves the responsiveness of Android apps by keeping the main thread unblocked. With the Dispatchers.Main dispatcher and the withContext function used appropriately, we can optimize how UI work executes and improve the user experience.</p><h2 id="总结"><a href="#总结" class="headerlink" title="Summary"></a>Summary</h2><p>This article took an in-depth look at seven essential topics in Android concurrent programming: coroutine basics, context and dispatchers, suspending functions, coroutine scopes, concurrency and sequencing, inter-coroutine communication, and using coroutines on the UI thread. Applied well, these let developers make the most of coroutines when building efficient Android apps. The article also stressed using coroutines judiciously, to avoid resource leaks and to keep code readable and maintainable.</p>]]></content>
    
    
    <summary type="html">This article gives a comprehensive look at how to use coroutines to implement asynchronous programming, concurrency, and performance optimization on Android, helping Android developers make better use of this powerful concurrency tool.</summary>
    
    
    
    <category term="教程指南" scheme="https://www.nablepart.com/categories/%E6%95%99%E7%A8%8B%E6%8C%87%E5%8D%97/"/>
    
    
    <category term="Android" scheme="https://www.nablepart.com/tags/Android/"/>
    
    <category term="Kotlin" scheme="https://www.nablepart.com/tags/Kotlin/"/>
    
    <category term="Coroutines" scheme="https://www.nablepart.com/tags/Coroutines/"/>
    
    <category term="性能优化" scheme="https://www.nablepart.com/tags/%E6%80%A7%E8%83%BD%E4%BC%98%E5%8C%96/"/>
    
    <category term="并发控制" scheme="https://www.nablepart.com/tags/%E5%B9%B6%E5%8F%91%E6%8E%A7%E5%88%B6/"/>
    
    <category term="Android UI" scheme="https://www.nablepart.com/tags/Android-UI/"/>
    
  </entry>
  
  <entry>
    <title>What impact do AIGC products have on my life?</title>
    <link href="https://www.nablepart.com/37aabdca3a84/"/>
    <id>https://www.nablepart.com/37aabdca3a84/</id>
    <published>2023-10-15T11:50:26.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
    <content type="html"><![CDATA[<h2 id="AIGC产品对我的生活有什么影响？"><a href="#AIGC产品对我的生活有什么影响？" class="headerlink" title="What impact do AIGC products have on my life?"></a>What impact do AIGC products have on my life?</h2><p>I have used these AIGC products for a few days, and I believe no Chinese product is at the same level. They perform well at speech-to-text and at recognizing questions in specific domains, but they are weak at general-purpose tasks and lack the human ability to synthesize all the available information into a reasonably accurate answer. Compared with ChatGPT, Apple’s Siri and Microsoft’s Cortana both seem dumb and single-purpose. Other products include:</p><p>Microsoft’s XiaoIce: a very popular AI chatbot that can chat with users about all kinds of topics and even write poetry.<br>Lark Technologies’ Larkbot: an enterprise-oriented AI chatbot that can handle a company’s HR and administrative tasks.<br>Turing Robot’s BabyQ: an AI chatbot that can answer questions and hold conversations with users.<br>Sengled’s smart-home AI system: a smart-home AI system that supports control by Chinese voice commands.<br>Baidu’s DuerOS: a conversational AI platform that can be integrated into a variety of devices and applications.</p><p>Although ChatGPT cannot yet handle some complex, routine professional work, it can help me in the following ways:</p><ol><li>Quickly find a basic introduction to a new field. (Its strong ability to summarize lets it easily list the key points of anything I want to learn about.)</li><li>Keep me company, answer personal questions, and complete code.</li><li>Draft outlines for articles, titles, and reports.</li><li>Handle tedious everyday writing work, such as summarizing key points, checking grammar, and shortening sentences.</li><li>Stay very ethical and polite, safe and warm, just like JARVIS.</li></ol><p><img src="https://cdn-images-1.medium.com/max/2836/1*C75u22hIi-wHu2xc0GIpSw.png" alt="AIGC provides us with some learning resources."></p><p><img src="https://cdn-images-1.medium.com/max/2868/1*OR8zppdIuI34EmooD4FvNw.png" alt="AIGC generates titles for us."></p><p>I will use it more in my daily work, and I do not have high expectations for Chinese language models. Past experience tells me that even when we have a first-class product, our business models will exploit it rather than benefit the public.</p><p>What about AI-generated images? I have some experience with DALL·E, Stable Diffusion, and Midjourney. I must admit they are remarkably good at turning your vague thoughts into something clear.</p><p>At work we often meet this scenario: someone wants to explore a new field, hoping to sharpen the main point and add some background information, so he needs some AIGC tools to help him imagine more (which is also Midjourney’s command word). These products can show you a picture from your dream or subconscious, and you can adjust its flavor and keep prompting until you are satisfied. AIGC tools are the best choice for handling such problems. <strong>They save your time and reward your curiosity, and they plant epic, imaginative concept images in your mind so you can learn things faster</strong>. That is their main value.</p><p><img src="https://cdn-images-1.medium.com/max/3824/1*P4jaRi75pP3kEj-i2OT07g.png" alt="Futuristic architecture inspired by Zaha Hadid"></p><p><img src="https://cdn-images-1.medium.com/max/2048/1*u45hvUY3sR1fh9Vu-Re-yQ.png" alt="An AIGC Border Collie character in Pixar style"></p><p><img src="https://cdn-images-1.medium.com/max/2048/1*U81KMtAZajI0HCadYF3hiA.png" alt="An AIGC avatar in League of Legends style"></p><p>As for another use, you can have these tools photoshop your portrait in the style of Pixar, Ghibli, Disney, or Marvel, and add some movie effects to it… It unleashes your imagination! The details are still imperfect for now, but I believe they will get better. As for which product is best, it is Midjourney, without a doubt.</p>]]></content>
    
    
    <summary type="html">How AIGC products affect daily life, including speech-to-text, domain-specific question recognition, and AI chatbots, and how these products improve productivity, spark imagination, and save time.</summary>
    
    
    
    
    <category term="工作效率" scheme="https://www.nablepart.com/tags/%E5%B7%A5%E4%BD%9C%E6%95%88%E7%8E%87/"/>
    
    <category term="AIGC产品" scheme="https://www.nablepart.com/tags/AIGC%E4%BA%A7%E5%93%81/"/>
    
    <category term="语音转文字" scheme="https://www.nablepart.com/tags/%E8%AF%AD%E9%9F%B3%E8%BD%AC%E6%96%87%E5%AD%97/"/>
    
    <category term="问题识别" scheme="https://www.nablepart.com/tags/%E9%97%AE%E9%A2%98%E8%AF%86%E5%88%AB/"/>
    
    <category term="AI聊天机器人" scheme="https://www.nablepart.com/tags/AI%E8%81%8A%E5%A4%A9%E6%9C%BA%E5%99%A8%E4%BA%BA/"/>
    
    <category term="想象力" scheme="https://www.nablepart.com/tags/%E6%83%B3%E8%B1%A1%E5%8A%9B/"/>
    
    <category term="节省时间&#39;" scheme="https://www.nablepart.com/tags/%E8%8A%82%E7%9C%81%E6%97%B6%E9%97%B4/"/>
    
  </entry>
  
  <entry>
    <title>Diablo 4 won&#39;t be joining XGP this year</title>
    <link href="https://www.nablepart.com/6428e8fb5208/"/>
    <id>https://www.nablepart.com/6428e8fb5208/</id>
    <published>2023-10-10T12:00:00.000Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
    <content type="html"><![CDATA[<p><img src="https://s2.loli.net/2023/11/02/QnFzXcmZA7Et492.png" alt="image.png"></p><h2 id="Diablo-4-not-joining-XGP-this-year-for-now"><a href="#Diablo-4-not-joining-XGP-this-year-for-now" class="headerlink" title="Diablo 4 not joining XGP this year for now"></a>Diablo 4 not joining XGP this year for now</h2><p>According to IGN, Microsoft’s acquisition of Activision Blizzard will be completed this week, and which platforms Activision’s various games will end up on has become a hot topic among gamers once this massive deal closes.</p><p>Regarding Activision’s Call of Duty series, Microsoft has already reached an agreement with rival Sony that “Call of Duty-related games will continue to be updated and run on PlayStation after the acquisition is complete.”</p><p>And just this week, Activision Blizzard said in a post on its official X account that there are no plans to add Diablo 4 or Call of Duty: Modern Warfare 3 to Game Pass this year.</p><p><img src="https://s2.loli.net/2023/11/02/c42uOGrKHbaEWqS.png" alt="image.png"></p><p>However, the company also said that although the games cannot be added this year, a variety of Activision titles are expected to join XGP next year; exact timing and game details will have to wait for follow-up news.</p><h2 id="Roblox-is-launching-V-Land-a-virtual-world-program-designed-to-eliminate-“menstrual-shame"><a href="#Roblox-is-launching-V-Land-a-virtual-world-program-designed-to-eliminate-“menstrual-shame" class="headerlink" title="Roblox is launching V-Land, a virtual world program designed to eliminate “menstrual shame."></a>Roblox is launching V-Land, a virtual world program designed to eliminate “menstrual shame”</h2><p>Swedish hygiene products maker Essity and feminine care brand Saba are teaming up with multiplayer sandbox platform Roblox to launch a virtual world project called “V-Land,” which aims to teach underage 
children about the human body and eliminate the stigma associated with menstruation.</p><p><img src="https://s2.loli.net/2023/11/02/h8Gz4teC3OgsaoV.png" alt="image.png"></p><p>Essity’s 2022 Global Health and Wellness Survey found that only 55 percent of people think they understand menstruation, while another survey of women found that 44 percent of women feel uncomfortable talking about menstruation with male relatives such as fathers and brothers.</p><p>Essity wants to raise awareness about menstrual health through science in the form of a game, thereby removing the shame in the matter.</p><p>“We wanted to start an innovative program that educates while gaming. Hence the decision to launch V-Land in Roblox, a world where players can find tampons, blood clots or uteruses. Each element has been carefully selected to raise awareness about menstruation, and we want players to experience an emotionally charged and exciting adventure without ridicule or bullying.”</p><h2 id="EA-Announces-EA-Sports-FC-24-Has-Reached-11-3-Million-Users-Worldwide-in-First-Week-of-Release"><a href="#EA-Announces-EA-Sports-FC-24-Has-Reached-11-3-Million-Users-Worldwide-in-First-Week-of-Release" class="headerlink" title="EA Announces EA Sports FC 24 Has Reached 11.3 Million Users Worldwide in First Week of Release"></a>EA Announces EA Sports FC 24 Has Reached 11.3 Million Users Worldwide in First Week of Release</h2><p>EA’s classic soccer game series, FIFA, has officially released the game this year under the name “EA Sports FC 24” as it ended its partnership with FIFA last year.</p><p>Despite mixed feelings from veteran gamers about the “game of the year,” early reports indicate that physical sales of the game are down compared to its predecessor, and the Steam review rate is only 55%:</p><p><img src="https://s2.loli.net/2023/11/02/msKgHnFAURluqBd.png" alt="image.png"></p><p>But in reality, the game’s player base has only grown, with “11.3 million players worldwide” since FC24’s launch, 
according to Game Industry.</p><p>The statistic counts every player registered for the game across platforms, including PS, Xbox, Switch, and PC, which is naturally much higher than physical retail sales; but it also shows that even though the game’s reputation has declined, it is still the game of choice for soccer fans and veteran players.</p><p>Cam Weber, president of EA Sports, said, “In addition to welcoming back millions of veteran players, the number of new players for FC24 is up nearly 20 percent year-over-year. This reflects the passion that fans everywhere have for the game. We are building the world’s largest soccer community with EA SPORTS FC and are just getting started.”</p><p><img src="https://s2.loli.net/2023/10/31/GdRVvyo5CtnDJ82.png" alt="image.png"></p>]]></content>
    
    
    <summary type="html">Diablo 4 won&#39;t be joining XGP this year / Virtual project to eliminate &quot;menstrual shame&quot; launched</summary>
    
    
    
    <category term="Game News" scheme="https://www.nablepart.com/categories/Game-News/"/>
    
    
    <category term="Diablo 4" scheme="https://www.nablepart.com/tags/Diablo-4/"/>
    
    <category term="XGP" scheme="https://www.nablepart.com/tags/XGP/"/>
    
    <category term="Activision&#39;s Call" scheme="https://www.nablepart.com/tags/Activision-s-Call/"/>
    
    <category term="Roblox" scheme="https://www.nablepart.com/tags/Roblox/"/>
    
    <category term="EA Sports FC 24" scheme="https://www.nablepart.com/tags/EA-Sports-FC-24/"/>
    
  </entry>
  
  <entry>
    <title>How to build a simple cryptocurrency blockchain using Node.js</title>
    <link href="https://www.nablepart.com/a5c14a55c1b0/"/>
    <id>https://www.nablepart.com/a5c14a55c1b0/</id>
    <published>2023-10-10T03:01:40.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
    <content type="html"><![CDATA[<h2 id="Building-a-simple-cryptocurrency-blockchain-with-Node-js"><a href="#Building-a-simple-cryptocurrency-blockchain-with-Node-js" class="headerlink" title="Building a simple cryptocurrency blockchain with Node.js"></a>Building a simple cryptocurrency blockchain with Node.js</h2><p>The blockchain is an open, digital, duplicated ledger of transactions. Each new transaction is recorded and stored in a cryptographically protected form that is difficult to change or modify. Copies of this recorded information are sent across the blockchain network, which makes it highly secure.</p><p>Cryptocurrency is a digitally secured currency used in much of today’s trade. Cryptography plays an important role in keeping cryptocurrencies secure.</p><p>It ensures that only genuine transactions are recorded and logged. Most cryptocurrencies rely on the decentralized principle of blockchain technology.</p><p>In this tutorial, we will take a closer look at blockchain and decentralization in some detail. We will also build a simple cryptocurrency system called thecoin.</p><p>Thecoin is an implementation of a cryptocurrency that we will build in this article.</p><h3 id="Precondition"><a href="#Precondition" class="headerlink" title="Prerequisites"></a>Prerequisites</h3><p>To keep this tutorial running smoothly, you need to have a good understanding of the following:</p><ul><li><p>JavaScript</p></li><li><p>Node.js</p></li></ul><p>You must also have:</p><ul><li><p>Node.js installed on your machine.</p></li><li><p>A code editor.</p></li></ul><h3 id="What-is-blockchain"><a href="#What-is-blockchain" class="headerlink" title="What is blockchain?"></a>What is blockchain?</h3><p>Bitcoin and Ether are digital cryptocurrencies powered by a robust technology called blockchain. 
It uses cryptography to securely connect and maintain a growing list of records called blocks.</p><p>Blockchain, as the name suggests, is a growing set of blocks of transaction data that form a chain of the transactions taking place. Valid transaction data is recorded into the blockchain network according to peer-to-peer rules set by the participants.</p><h3 id="Decentralization"><a href="#Decentralization" class="headerlink" title="Decentralization"></a>Decentralization</h3><p>Typically, the data in a database is centralized: everything depends on a single server, so a failure of that one system puts all the data at risk. Decentralization, by contrast, allows data to be stored in many places, making storage faster, safer, and more robust.</p><p>Blockchain stores its information in several places. Whenever a new block is added to the blockchain, a copy is sent to all computers. This makes tampering with the blockchain very difficult, because all the computers in the network must agree to a change before it can be made.</p><p>By the end of this article, we will have a better understanding of blockchain and cryptocurrencies and how they work.</p><p>Let’s get into the code. 
I will name my application thecoin.</p><p>Create a file called thecoin.js and open it in your code editor.</p><p>In the project folder, install the crypto-js library we are going to use with the following command:</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">npm install --save crypto-js</span><br></pre></td></tr></table></figure><p>We will use this library to import its modules into our project.</p><p>I will start by creating a class BlockCrypto, as shown below.</p><figure class="highlight js"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">const</span> <span class="title class_">SHA256</span> = <span class="built_in">require</span>(<span class="string">&#x27;crypto-js/sha256&#x27;</span>);</span><br><span class="line"><span class="keyword">class</span> <span class="title class_">BlockCrypto</span>&#123;</span><br><span class="line">    <span class="title function_">constructor</span>(<span class="params">index, current_time, info, nextHash=<span class="string">&quot; &quot;</span></span>)&#123;</span><br><span class="line">    <span class="variable language_">this</span>.<span class="property">index</span> = index;</span><br><span class="line">    <span class="variable language_">this</span>.<span class="property">current_time</span> = current_time;</span><br><span class="line">    <span class="variable language_">this</span>.<span class="property">info</span> = info;</span><br><span class="line">    <span class="variable language_">this</span>.<span class="property">nextHash</span> = nextHash;</span><br><span class="line">    <span class="variable language_">this</span>.<span class="property">hash</span> = <span class="variable language_">this</span>.<span class="title function_">computeHash</span>();</span><br><span class="line">    &#125;</span><br><span class="line">    <span class="title function_">computeHash</span>(<span class="params"></span>)&#123;</span><br><span class="line">        <span class="keyword">return</span> <span class="title class_">SHA256</span>(<span class="variable language_">this</span>.<span class="property">index</span> + <span class="variable language_">this</span>.<span class="property">nextHash</span> + <span class="variable language_">this</span>.<span class="property">current_time</span> + <span class="title class_">JSON</span>.<span class="title function_">stringify</span>(<span class="variable language_">this</span>.<span class="property">info</span>)).<span class="title function_">toString</span>();</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>I will explain each part of the code here.</p><p>I’ve created a class BlockCrypto for our blocks and added a constructor, just like any other JavaScript class.</p><p>In the constructor, we initialize its properties and assign the parameters to it as follows.</p><ul><li><p>crypto-js/sha256: This is the module we import to calculate the hash of each block. We use the toString() method to convert the hash to a string, since the module returns an object.</p></li><li><p>index: This is a unique number that tracks the position of each block in the blockchain.</p></li><li><p>current_time: As the name suggests, it keeps track of when each transaction was completed.</p></li><li><p>info: All completed transaction data is recorded and stored in this property.</p></li><li><p>nextHash: Despite its name, it stores the hash of the preceding block in the chain. It is mainly used to keep and maintain the integrity of the blockchain.</p></li><li><p>computeHash: Based on the properties above, this method computes the hash of the block.</p></li></ul><h3 id="blockchain-theorem-in-calculus"><a href="#blockchain-theorem-in-calculus" class="headerlink" title="Blockchain theory"></a>Blockchain theory</h3><p>A blockchain is a type of database that stores collections of data in groups (blocks) with a certain storage capacity. Each block is connected to the blocks created before it, and this forms a chain of data.</p><p>The chain is irreversible because the system is decentralized. 
Here, each block is assigned a timestamp when it is added to the chain.</p><p>Now, let’s create a class Blockchain, which will maintain these operations.</p><figure class="highlight js"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">class</span> <span class="title class_">Blockchain</span>&#123;</span><br><span class="line">    <span class="title function_">constructor</span>(<span class="params"></span>)&#123;</span><br><span class="line">        <span class="variable language_">this</span>.<span class="property">block1chain</span> = [<span class="variable language_">this</span>.<span class="title function_">initGenesisBlock</span>()];</span><br><span class="line">    &#125;</span><br><span class="line">    <span class="title function_">initGenesisBlock</span>(<span class="params"></span>)&#123;</span><br><span class="line">        <span class="keyword">return</span> <span class="keyword">new</span> <span class="title class_">BlockCrypto</span>(<span class="number">0</span>, <span class="string">&quot;06/04/2021&quot;</span>, <span class="string">&quot;Initial Block in the Chain&quot;</span>, <span class="string">&quot;0&quot;</span>);</span><br><span class="line">    &#125;</span><br><span class="line">    <span class="title function_">latestBlock</span>(<span class="params"></span>)&#123;</span><br><span class="line">        <span class="keyword">return</span> <span class="variable language_">this</span>.<span class="property">block1chain</span>[<span class="variable language_">this</span>.<span class="property">block1chain</span>.<span class="property">length</span> - <span class="number">1</span>];</span><br><span class="line">    &#125;</span><br><span class="line">    <span class="title function_">addNewBlock</span>(<span class="params">newBlock</span>)&#123;</span><br><span class="line">        newBlock.<span class="property">nextHash</span> = <span class="variable language_">this</span>.<span class="title function_">latestBlock</span>().<span class="property">hash</span>;</span><br><span class="line">        newBlock.<span class="property">hash</span> = newBlock.<span class="title function_">computeHash</span>();</span><br><span class="line">        <span class="variable language_">this</span>.<span class="property">block1chain</span>.<span class="title function_">push</span>(newBlock);</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Let’s understand the code snippet above.</p><p>As usual, we have our constructor, which instantiates the blockchain. It does so by calling the initGenesisBlock() method, which creates the first block of the chain. In our example, the chain is an array of blocks.</p><ul><li>initGenesisBlock(): This creates the first block of the peer-to-peer network; it is not linked to any other block. Following our indexing, its index is 0 .</li></ul><blockquote><p>Note that we created it using the BlockCrypto class we created earlier and passed all the arguments as parameters.</p></blockquote><ul><li><p>latestBlock: As named, we use it to find the last block added to the chain. 
As mentioned earlier, this helps ensure the integrity of the chain: the hash stored in a new block is mapped to the hash of the previous block.</p></li><li><p>addNewBlock: Using this method, a new block is added to the chain. The previous block’s hash is matched to the new block’s stored hash to ensure minimal or no tampering in the chain.</p></li></ul><p>Now our blockchain is ready to work. But we are still missing something: the core principle of the blockchain, i.e. the integrity of the blockchain.</p><p>Let’s see how we can validate it and test our application.</p><h3 id="Verify-the-integrity-of-the-blockchain"><a href="#Verify-the-integrity-of-the-blockchain" class="headerlink" title="Verify the integrity of the blockchain"></a>Verify the integrity of the blockchain</h3><p>The main feature of the blockchain is that once a block is added to the network, it cannot be changed without invalidating the integrity of the entire blockchain.</p><p>To enforce this, we use cryptographic hashing to secure and verify the blockchain, generating a new hash every time a change is made in a block.</p><p>We will loop through the entire blockchain and check whether any of the hashes have been tampered with, making an exception for the first block, which is hardcoded.</p><p>In addition, this method verifies that the hashes of every two consecutive blocks point to each other. 
It will return false if the integrity of the blockchain has been compromised; otherwise, if no problems are found, it will return true.</p><p>We will create this method in the Blockchain class.</p><figure class="highlight js"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br></pre></td><td class="code"><pre><span class="line"><span class="title function_">checkValidity</span>(<span class="params"></span>)&#123;</span><br><span class="line">    <span class="comment">// Checking validity, skipping the hardcoded genesis block</span></span><br><span class="line">    <span class="keyword">for</span>(<span class="keyword">let</span> i = <span class="number">1</span>; i &lt; <span class="variable language_">this</span>.<span class="property">block1chain</span>.<span class="property">length</span>; i++) &#123;</span><br><span class="line">        <span class="keyword">const</span> currentBlock = <span class="variable language_">this</span>.<span class="property">block1chain</span>[i];</span><br><span class="line">        <span class="keyword">const</span> previousBlock = <span class="variable language_">this</span>.<span class="property">block1chain</span>[i-<span class="number">1</span>];</span><br><span class="line">        <span class="comment">// Checking the current block hash</span></span><br><span class="line">        <span class="keyword">if</span>(currentBlock.<span class="property">hash</span> !== currentBlock.<span class="title function_">computeHash</span>()) &#123;</span><br><span class="line">            <span class="keyword">return</span> <span class="literal">false</span>;</span><br><span class="line">        &#125;</span><br><span class="line">        <span class="comment">// Comparing the stored hash with the previous block hash</span></span><br><span class="line">        <span class="keyword">if</span>(currentBlock.<span class="property">nextHash</span> !== previousBlock.<span class="property">hash</span>) &#123;</span><br><span class="line">            <span class="keyword">return</span> <span class="literal">false</span>;</span><br><span class="line">        &#125;</span><br><span class="line">    &#125;</span><br><span class="line">    <span class="keyword">return</span> <span class="literal">true</span>;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>Now we can test our application and see the results.</p><p>But before we run the code, let’s create a new instance of the Blockchain class, name it thecoin, and add some blocks to the blockchain using sample values.</p><figure class="highlight js"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">let</span> thecoin = <span class="keyword">new</span> <span class="title class_">Blockchain</span>();</span><br><span class="line"></span><br><span class="line">thecoin.<span class="title function_">addNewBlock</span>(<span class="keyword">new</span> <span class="title class_">BlockCrypto</span>(<span class="number">1</span>, <span class="string">&quot;06/04/2021&quot;</span>, &#123;<span class="attr">sender</span>: <span class="string">&quot;Rabin Yitzack&quot;</span>, <span class="attr">recipient</span>: <span
class="string">&quot;Loyd Eve&quot;</span>, <span class="attr">quantity</span>: <span class="number">20</span>&#125;));</span><br><span class="line"></span><br><span class="line">thecoin.<span class="title function_">addNewBlock</span>(<span class="keyword">new</span> <span class="title class_">BlockCrypto</span>(<span class="number">2</span>, <span class="string">&quot;07/04/2021&quot;</span>, &#123;<span class="attr">sender</span>: <span class="string">&quot;Anita Vyona&quot;</span>, <span class="attr">recipient</span>: <span class="string">&quot;Felix Mush&quot;</span>, <span class="attr">quantity</span>: <span class="number">349</span>&#125;));</span><br><span class="line"></span><br><span class="line"><span class="variable language_">console</span>.<span class="title function_">log</span>(<span class="title class_">JSON</span>.<span class="title function_">stringify</span>(thecoin, <span class="literal">null</span>, <span class="number">4</span>));</span><br></pre></td></tr></table></figure><h3 id="Run-our-blockchain"><a href="#Run-our-blockchain" class="headerlink" title="Run our blockchain"></a>Run our blockchain</h3><p>Our terminal node thecoin.js , enter this command will result.</p><blockquote><p>Note: Before running this command, make sure to navigate to the correct path on your terminal.<br>Tip: Use the command pwd to check the path.</p></blockquote><h2 id="conclusions"><a href="#conclusions" class="headerlink" title="conclusions"></a>conclusions</h2><p>You’ve already built your own cryptocurrency using Node.js. This step brings you one step closer to getting you started building professional applications using Node.js, or alternatively, you can just add more features to our simple blockchain and share it with the market.</p><p>Nonetheless, I hope this tutorial has provided you with the basic skill proficiency to move forward in exciting Node.js development!</p>]]></content>
    
    
    <summary type="html">Thecoin is a simple cryptocurrency that we build from scratch in this article.</summary>
    
    
    
    <category term="Cryptocurrency" scheme="https://www.nablepart.com/categories/Cryptocurrency/"/>
    
    
    <category term="cryptocurrency" scheme="https://www.nablepart.com/tags/cryptocurrency/"/>
    
    <category term="Defi" scheme="https://www.nablepart.com/tags/Defi/"/>
    
    <category term="Node.js" scheme="https://www.nablepart.com/tags/Node-js/"/>
    
    <category term="node" scheme="https://www.nablepart.com/tags/node/"/>
    
  </entry>
  
  <entry>
    <title>iPhone 15 Device Adaptation - Device identification</title>
    <link href="https://www.nablepart.com/3e68f982eff7/"/>
    <id>https://www.nablepart.com/3e68f982eff7/</id>
    <published>2023-10-07T08:31:33.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
    <content type="html"><![CDATA[<h2 id="iPhone-15-device"><a href="#iPhone-15-device" class="headerlink" title="iPhone 15 device"></a>iPhone 15 device</h2><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">   if ([platform isEqualToString:@&quot;iPhone15,4&quot;]) return @&quot;iPhone 15&quot;;</span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone15,5&quot;]) return @&quot;iPhone 15 Plus&quot;;</span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone16,1&quot;]) return @&quot;iPhone 15 Pro&quot;;</span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone16,2&quot;]) return @&quot;iPhone 15 Pro Max&quot;;</span><br></pre></td></tr></table></figure><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span 
class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br><span class="line">75</span><br><span class="line">76</span><br><span class="line">77</span><br><span class="line">78</span><br><span class="line">79</span><br><span class="line">80</span><br><span class="line">81</span><br><span class="line">82</span><br><span class="line">83</span><br><span class="line">84</span><br><span class="line">85</span><br><span class="line">86</span><br><span class="line">87</span><br><span class="line">88</span><br><span class="line">89</span><br><span class="line">90</span><br><span class="line">91</span><br><span class="line">92</span><br><span class="line">93</span><br><span 
class="line">94</span><br><span class="line">95</span><br><span class="line">96</span><br><span class="line">97</span><br><span class="line">98</span><br><span class="line">99</span><br><span class="line">100</span><br><span class="line">101</span><br><span class="line">102</span><br><span class="line">103</span><br><span class="line">104</span><br><span class="line">105</span><br><span class="line">106</span><br><span class="line">107</span><br><span class="line">108</span><br><span class="line">109</span><br><span class="line">110</span><br><span class="line">111</span><br><span class="line">112</span><br><span class="line">113</span><br><span class="line">114</span><br><span class="line">115</span><br><span class="line">116</span><br><span class="line">117</span><br><span class="line">118</span><br><span class="line">119</span><br><span class="line">120</span><br><span class="line">121</span><br><span class="line">122</span><br><span class="line">123</span><br><span class="line">124</span><br><span class="line">125</span><br><span class="line">126</span><br><span class="line">127</span><br><span class="line">128</span><br><span class="line">129</span><br><span class="line">130</span><br><span class="line">131</span><br><span class="line">132</span><br><span class="line">133</span><br><span class="line">134</span><br><span class="line">135</span><br><span class="line">136</span><br><span class="line">137</span><br><span class="line">138</span><br><span class="line">139</span><br><span class="line">140</span><br><span class="line">141</span><br><span class="line">142</span><br><span class="line">143</span><br><span class="line">144</span><br><span class="line">145</span><br><span class="line">146</span><br><span class="line">147</span><br><span class="line">148</span><br><span class="line">149</span><br><span class="line">150</span><br><span class="line">151</span><br><span class="line">152</span><br><span class="line">153</span><br><span 
class="line">154</span><br><span class="line">155</span><br><span class="line">156</span><br><span class="line">157</span><br><span class="line">158</span><br><span class="line">159</span><br><span class="line">160</span><br><span class="line">161</span><br><span class="line">162</span><br><span class="line">163</span><br><span class="line">164</span><br><span class="line">165</span><br><span class="line">166</span><br><span class="line">167</span><br><span class="line">168</span><br><span class="line">169</span><br><span class="line">170</span><br><span class="line">171</span><br><span class="line">172</span><br><span class="line">173</span><br><span class="line">174</span><br><span class="line">175</span><br><span class="line">176</span><br><span class="line">177</span><br><span class="line">178</span><br><span class="line">179</span><br><span class="line">180</span><br><span class="line">181</span><br><span class="line">182</span><br><span class="line">183</span><br><span class="line">184</span><br><span class="line">185</span><br><span class="line">186</span><br><span class="line">187</span><br><span class="line">188</span><br><span class="line">189</span><br><span class="line">190</span><br><span class="line">191</span><br><span class="line">192</span><br><span class="line">193</span><br><span class="line">194</span><br><span class="line">195</span><br><span class="line">196</span><br><span class="line">197</span><br><span class="line">198</span><br><span class="line">199</span><br><span class="line">200</span><br><span class="line">201</span><br><span class="line">202</span><br><span class="line">203</span><br><span class="line">204</span><br><span class="line">205</span><br><span class="line">206</span><br><span class="line">207</span><br><span class="line">208</span><br><span class="line">209</span><br><span class="line">210</span><br><span class="line">211</span><br><span class="line">212</span><br><span class="line">213</span><br><span 
class="line">214</span><br><span class="line">215</span><br><span class="line">216</span><br><span class="line">217</span><br><span class="line">218</span><br><span class="line">219</span><br><span class="line">220</span><br><span class="line">221</span><br><span class="line">222</span><br><span class="line">223</span><br><span class="line">224</span><br><span class="line">225</span><br><span class="line">226</span><br><span class="line">227</span><br><span class="line">228</span><br><span class="line">229</span><br><span class="line">230</span><br><span class="line">231</span><br><span class="line">232</span><br><span class="line">233</span><br><span class="line">234</span><br><span class="line">235</span><br><span class="line">236</span><br><span class="line">237</span><br><span class="line">238</span><br><span class="line">239</span><br><span class="line">240</span><br><span class="line">241</span><br><span class="line">242</span><br><span class="line">243</span><br><span class="line">244</span><br><span class="line">245</span><br><span class="line">246</span><br><span class="line">247</span><br><span class="line">248</span><br><span class="line">249</span><br><span class="line">250</span><br><span class="line">251</span><br><span class="line">252</span><br><span class="line">253</span><br><span class="line">254</span><br><span class="line">255</span><br><span class="line">256</span><br><span class="line">257</span><br><span class="line">258</span><br><span class="line">259</span><br><span class="line">260</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line">// Requires #import &quot;sys/utsname.h&quot;</span><br><span class="line"></span><br><span class="line">+ (NSString *)getDeviceIdentifier &#123;</span><br><span class="line"></span><br><span class="line">   struct utsname systemInfo;</span><br><span class="line"></span><br><span class="line">   uname(&amp;systemInfo);</span><br><span class="line"></span><br><span
class="line">   // Get the device identifier</span><br><span class="line"></span><br><span class="line">   NSString *platform = [NSString stringWithCString:systemInfo.machine encoding:NSUTF8StringEncoding];</span><br><span class="line"></span><br><span class="line">   </span><br><span class="line"></span><br><span class="line">   // iPhone</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone1,1&quot;]) return @&quot;iPhone 2G&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone1,2&quot;]) return @&quot;iPhone 3G&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone2,1&quot;]) return @&quot;iPhone 3GS&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone3,1&quot;]) return @&quot;iPhone 4&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone3,2&quot;]) return @&quot;iPhone 4&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone3,3&quot;]) return @&quot;iPhone 4&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone4,1&quot;]) return @&quot;iPhone 4S&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone5,1&quot;]) return @&quot;iPhone 5&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone5,2&quot;]) return @&quot;iPhone 5&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone5,3&quot;]) return @&quot;iPhone 5c&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform 
isEqualToString:@&quot;iPhone5,4&quot;]) return @&quot;iPhone 5c&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone6,1&quot;]) return @&quot;iPhone 5s&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone6,2&quot;]) return @&quot;iPhone 5s&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone7,1&quot;]) return @&quot;iPhone 6 Plus&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone7,2&quot;]) return @&quot;iPhone 6&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone8,1&quot;]) return @&quot;iPhone 6s&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone8,2&quot;]) return @&quot;iPhone 6s Plus&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone8,4&quot;]) return @&quot;iPhone SE&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone9,1&quot;]) return @&quot;iPhone 7&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone9,2&quot;]) return @&quot;iPhone 7 Plus&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone10,1&quot;]) return @&quot;iPhone 8&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone10,4&quot;]) return @&quot;iPhone 8&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone10,2&quot;]) return @&quot;iPhone 8 Plus&quot;;</span><br><span
class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone10,5&quot;]) return @&quot;iPhone 8 Plus&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone10,3&quot;]) return @&quot;iPhone X&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone10,6&quot;]) return @&quot;iPhone X&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone11,2&quot;]) return @&quot;iPhone XS&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone11,6&quot;]) return @&quot;iPhone XS Max&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone11,8&quot;]) return @&quot;iPhone XR&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone12,1&quot;]) return @&quot;iPhone 11&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone12,3&quot;]) return @&quot;iPhone 11 Pro&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone12,5&quot;]) return @&quot;iPhone 11 Pro Max&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone12,8&quot;]) return @&quot;iPhone SE (2nd generation)&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone13,1&quot;]) return @&quot;iPhone 12 mini&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone13,2&quot;]) return @&quot;iPhone 12&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform 
isEqualToString:@&quot;iPhone13,3&quot;]) return @&quot;iPhone 12 Pro&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone13,4&quot;]) return @&quot;iPhone 12 Pro Max&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone14,4&quot;]) return @&quot;iPhone 13 mini&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone14,5&quot;]) return @&quot;iPhone 13&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone14,2&quot;]) return @&quot;iPhone 13 Pro&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone14,3&quot;]) return @&quot;iPhone 13 Pro Max&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone14,6&quot;]) return @&quot;iPhone SE (3rd generation)&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone14,7&quot;]) return @&quot;iPhone 14&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone14,8&quot;]) return @&quot;iPhone 14 Plus&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone15,2&quot;]) return @&quot;iPhone 14 Pro&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone15,3&quot;]) return @&quot;iPhone 14 Pro Max&quot;;</span><br><span class="line"></span><br><span class="line">   // iPhone 15 devices</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone15,4&quot;]) return @&quot;iPhone 15&quot;;</span><br><span
class="line">   if ([platform isEqualToString:@&quot;iPhone15,5&quot;]) return @&quot;iPhone 15 Plus&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone16,1&quot;]) return @&quot;iPhone 15 Pro&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPhone16,2&quot;]) return @&quot;iPhone 15 Pro Max&quot;;</span><br><span class="line"></span><br><span class="line">   </span><br><span class="line"></span><br><span class="line">   // iPod</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPod1,1&quot;])  return @&quot;iPod Touch 1&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPod2,1&quot;])  return @&quot;iPod Touch 2&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPod3,1&quot;])  return @&quot;iPod Touch 3&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPod4,1&quot;])  return @&quot;iPod Touch 4&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPod5,1&quot;])  return @&quot;iPod Touch 5&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPod7,1&quot;])  return @&quot;iPod Touch 6&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPod9,1&quot;])  return @&quot;iPod Touch 7&quot;;</span><br><span class="line"></span><br><span class="line">   </span><br><span class="line"></span><br><span class="line">   // iPad</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad1,1&quot;])  return @&quot;iPad 1&quot;;</span><br><span
class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad2,1&quot;])  return @&quot;iPad 2&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad2,2&quot;]) return @&quot;iPad 2&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad2,3&quot;])  return @&quot;iPad 2&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad2,4&quot;])  return @&quot;iPad 2&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad2,5&quot;])  return @&quot;iPad Mini 1&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad2,6&quot;])  return @&quot;iPad Mini 1&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad2,7&quot;])  return @&quot;iPad Mini 1&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad3,1&quot;])  return @&quot;iPad 3&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad3,2&quot;])  return @&quot;iPad 3&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad3,3&quot;])  return @&quot;iPad 3&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad3,4&quot;])  return @&quot;iPad 4&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad3,5&quot;])  return @&quot;iPad 4&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad3,6&quot;])  return @&quot;iPad 4&quot;;</span><br><span
class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad4,1&quot;])  return @&quot;iPad Air&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad4,2&quot;])  return @&quot;iPad Air&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad4,3&quot;])  return @&quot;iPad Air&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad4,4&quot;])  return @&quot;iPad Mini 2&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad4,5&quot;])  return @&quot;iPad Mini 2&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad4,6&quot;])  return @&quot;iPad Mini 2&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad4,7&quot;])  return @&quot;iPad mini 3&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad4,8&quot;])  return @&quot;iPad mini 3&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad4,9&quot;])  return @&quot;iPad mini 3&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad5,1&quot;])  return @&quot;iPad mini 4&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad5,2&quot;])  return @&quot;iPad mini 4&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad5,3&quot;])  return @&quot;iPad Air 2&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad5,4&quot;])  return 
@&quot;iPad Air 2&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad6,3&quot;])  return @&quot;iPad Pro (9.7-inch)&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad6,4&quot;])  return @&quot;iPad Pro (9.7-inch)&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad6,7&quot;])  return @&quot;iPad Pro (12.9-inch)&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad6,8&quot;])  return @&quot;iPad Pro (12.9-inch)&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad6,11&quot;])  return @&quot;iPad 5&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad6,12&quot;])  return @&quot;iPad 5&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad7,1&quot;])  return @&quot;iPad Pro 2 (12.9-inch)&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad7,2&quot;])  return @&quot;iPad Pro 2 (12.9-inch)&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad7,3&quot;])  return @&quot;iPad Pro (10.5-inch)&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad7,4&quot;])  return @&quot;iPad Pro (10.5-inch)&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad7,5&quot;])  return @&quot;iPad 6&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad7,6&quot;])  return @&quot;iPad 6&quot;;</span><br><span
class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad7,11&quot;])  return @&quot;iPad 7&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad7,12&quot;])  return @&quot;iPad 7&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad8,1&quot;])  return @&quot;iPad Pro (11-inch)&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad8,2&quot;])  return @&quot;iPad Pro (11-inch)&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad8,3&quot;])  return @&quot;iPad Pro (11-inch)&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad8,4&quot;])  return @&quot;iPad Pro (11-inch)&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad8,5&quot;])  return @&quot;iPad Pro 3 (12.9-inch)&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad8,6&quot;])  return @&quot;iPad Pro 3 (12.9-inch)&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad8,7&quot;])  return @&quot;iPad Pro 3 (12.9-inch)&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad8,8&quot;])  return @&quot;iPad Pro 3 (12.9-inch)&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad11,1&quot;])  return @&quot;iPad mini 5&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad11,2&quot;])  return @&quot;iPad mini 5&quot;;</span><br><span class="line"></span><br><span
class="line">   if ([platform isEqualToString:@&quot;iPad11,3&quot;])  return @&quot;iPad Air 3&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;iPad11,4&quot;])  return @&quot;iPad Air 3&quot;;</span><br><span class="line"></span><br><span class="line">   </span><br><span class="line"></span><br><span class="line">   // Others</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;i386&quot;])   return @&quot;iPhone Simulator&quot;;</span><br><span class="line"></span><br><span class="line">   if ([platform isEqualToString:@&quot;x86_64&quot;])  return @&quot;iPhone Simulator&quot;;</span><br><span class="line"></span><br><span class="line">   </span><br><span class="line"></span><br><span class="line">   return platform;</span><br><span class="line"></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure>]]></content>
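The long if/else chain above maps the raw `uname` machine string to a marketing name. The same table can be expressed as a plain dictionary lookup with the raw identifier as the fallback; the sketch below uses JavaScript (the other language on this blog) rather than Objective-C, and the `deviceName` helper is illustrative, not part of the article's code. The mapping entries are taken from the article's own table:

```javascript
// Illustrative dictionary-based rewrite of the if/else identifier table.
// Only a few entries from the article's table are shown.
const DEVICE_NAMES = {
  "iPhone15,4": "iPhone 15",
  "iPhone15,5": "iPhone 15 Plus",
  "iPhone16,1": "iPhone 15 Pro",
  "iPhone16,2": "iPhone 15 Pro Max",
  "i386": "iPhone Simulator",
  "x86_64": "iPhone Simulator",
};

// Mirror the Objective-C method's final `return platform;` fallback:
// unknown identifiers are returned unchanged.
function deviceName(platform) {
  return DEVICE_NAMES[platform] ?? platform;
}

console.log(deviceName("iPhone16,1")); // "iPhone 15 Pro"
console.log(deviceName("iPhone99,9")); // unknown identifier, returned as-is
```

A lookup table keeps the data in one place and makes it trivial to extend when the next device generation ships, which is the main maintenance pain of the if/else version.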
    
    
    <summary type="html">How to identify iPhone 15 devices by their model identifier</summary>
    
    
    
    <category term="IOS development" scheme="https://www.nablepart.com/categories/IOS-development/"/>
    
    
    <category term="DNS" scheme="https://www.nablepart.com/tags/DNS/"/>
    
    <category term="iphoe" scheme="https://www.nablepart.com/tags/iphoe/"/>
    
    <category term="Objective-C" scheme="https://www.nablepart.com/tags/Objective-C/"/>
    
    <category term="iOS" scheme="https://www.nablepart.com/tags/IOS/"/>
    
    <category term="Xcode" scheme="https://www.nablepart.com/tags/xCode/"/>
    
  </entry>
  
  <entry>
    <title>Investing over 800 million US dollars to build a factory, photovoltaic leader Attese ventures into the US market again.</title>
    <link href="https://www.nablepart.com/a1a0882ec069/"/>
    <id>https://www.nablepart.com/a1a0882ec069/</id>
    <published>2023-10-06T15:03:40.000Z</published>
    <updated>2025-08-25T09:00:39.806Z</updated>
    
    <content type="html"><![CDATA[<p>On October 30th, Attese (ATS) announced an investment of $839 million to build a factory in the US that will produce N-type solar cells for its own module production base. The new factory, covering about 480 acres, will be located in Indiana and is expected to be completed by the end of 2025, with an annual production capacity of 5 GW.</p><p>This marks another milestone for ATS in the US market. In June of this year, shortly after returning to the A-share market, the company announced plans to establish a photovoltaic module production base in Texas, with an investment of over $250 million and an expected annual capacity of 5 GW once it begins operating by the end of this year. It is the first time ATS has built a manufacturing plant in the US.</p><p>Two months after announcing its first US factory, in August of this year, ATS secured a long-term sales agreement for 7 GW of photovoltaic modules, to be supplied to one of the world’s largest power service providers, the French utility EDF. In terms of both order size and partners, ATS has taken another solid step on its path into the US market.</p><p>In fact, ATS is just one of many Chinese companies entering the US market. Since the beginning of this year, leading domestic photovoltaic companies have successively announced plans to invest in photovoltaic module projects in the US. In January, JA Solar announced plans for its first factory in Phoenix, Arizona, producing high-performance solar modules with a maximum annual capacity of 2 GW. Trina Solar followed suit, announcing in March that it would partner with US clean energy developer Invenergy to build its first 5 GW photovoltaic module factory in Ohio.</p><p>Some are entering the market for the first time, while others keep increasing their investments. 
In April of this year, JinkoSolar, which had already established a US factory, announced an additional $81.37 million investment in its 1 GW solar module production line in Florida, which has been in operation since 2018. In the second half of the year, TCL’s affiliate Maxeon and Trina Solar also announced plans to invest in photovoltaic module factories in the US. For a time, domestic photovoltaic giants all looked to the US, and building factories there became a trend.</p><p>The Chinese “photovoltaic army’s” collective entry into the US market is driven mainly by trade policy and market potential.</p><p>On the policy side, the Biden administration introduced the “Inflation Reduction Act” in 2022, which imposes strict requirements on the localization ratio and the subsidy scope of photovoltaic products. For example, to obtain an additional 10% subsidy, 100% of the steel used in photovoltaic modules and other “manufactured products” must come from the United States, and the domestic manufacturing share of the finished product must exceed 40%; for projects starting construction after 2026, that threshold rises to 55%.</p><p>ATS stated in the announcement that the new project can enjoy preferential policies from the US government.</p><p>For a long time, Chinese companies chose to build factories in countries such as Thailand and Vietnam and then sell their products to the United States. 
However, the US Department of Commerce launched anti-circumvention investigations into photovoltaic products from four Southeast Asian countries last year and published its findings in August this year: several companies, including LONGi Solar, JA Solar, ATS, and BYD Hong Kong, were preliminarily found to have engaged in “circumvention”.</p><p>Although many companies have said the ruling has had no significant impact on their business, it has once again sent a strong signal: companies need to build out overseas supply chains.</p><p>Another direct reason for building factories in the United States is the huge growth potential of the US photovoltaic market. According to Bloomberg New Energy Finance’s forecast, the United States will add 358 GW of photovoltaic installations from 2023 to 2030. A report released in September by the global natural resources consultancy Wood Mackenzie shows that the United States added 11.7 GW of photovoltaic installations in the first half of 2023, a year-on-year increase of 37.7%, and is expected to add 20 GW in the second half, equivalent to the total installed in all of 2022.</p><p>Domestic companies are well aware of this headroom in the US photovoltaic market. JinkoSolar said during recent investor meetings that if trade policies are relaxed next year and domestically manufactured photovoltaic products gradually come online, supply chain pressure will ease, and it expects demand for new installations in the United States to exceed 40 GW in 2024.</p><p>Since the beginning of this year, prices across the domestic photovoltaic industry chain have fallen sharply, and overseas business has become an important driver of future growth. Despite the uncertainties of going global, no company is willing to give up in the face of such huge market demand.</p>]]></content>
    
    
    <summary type="html">Domestic photovoltaic companies in China have started a wave of building factories in the United States this year, with Attese leading the way.</summary>
    
    
    
    <category term="Finance" scheme="https://www.nablepart.com/categories/Finance/"/>
    
    
    <category term="US market" scheme="https://www.nablepart.com/tags/US-market/"/>
    
    <category term="investment" scheme="https://www.nablepart.com/tags/investment/"/>
    
    <category term="photovoltaic" scheme="https://www.nablepart.com/tags/photovoltaic/"/>
    
    <category term="trade policy" scheme="https://www.nablepart.com/tags/trade-policy/"/>
    
    <category term="Market potential" scheme="https://www.nablepart.com/tags/Market-potential/"/>
    
  </entry>
  
  <entry>
    <title>What are the advantages of GoDaddy</title>
    <link href="https://www.nablepart.com/179249c6fa98/"/>
    <id>https://www.nablepart.com/179249c6fa98/</id>
    <published>2023-10-03T12:11:00.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
    <content type="html"><![CDATA[<p>GoDaddy is a globally recognized domain name registrar and web hosting service provider.</p><p>Advantages of GoDaddy include:</p><ul><li><p>High market share: GoDaddy is one of the largest domain name registrars in the world, with a market share of over 30%.</p></li><li><p>Affordable: GoDaddy’s domain name registration and web hosting services are relatively inexpensive, especially for individuals and small businesses.</p></li><li><p>Convenient and easy to use: GoDaddy’s user interface is simple and easy to use, making it easy for even people with no technical background to register domain names and open websites.</p></li><li><p>Diversified services: GoDaddy offers a wide range of online business services that can meet the various needs of customers.</p></li><li><p>High security: GoDaddy offers a variety of security solutions that can protect customers’ websites from hacker attacks and malware.</p></li></ul>]]></content>
    
    
    <summary type="html">Five advantages of GoDaddy</summary>
    
    
    
    <category term="Domain name" scheme="https://www.nablepart.com/categories/Domain-name/"/>
    
    
    <category term="Domain name" scheme="https://www.nablepart.com/tags/Domain-name/"/>
    
    <category term="Godaddy" scheme="https://www.nablepart.com/tags/Godaddy/"/>
    
    <category term="Github" scheme="https://www.nablepart.com/tags/Github/"/>
    
    <category term="CNAME" scheme="https://www.nablepart.com/tags/CNAME/"/>
    
    <category term="DNS" scheme="https://www.nablepart.com/tags/DNS/"/>
    
  </entry>
  
  <entry>
    <title>Standing on the shoulders of artificial intelligence: how does Caduceus drive the Web3 revolution?</title>
    <link href="https://www.nablepart.com/05f298d327f7/"/>
    <id>https://www.nablepart.com/05f298d327f7/</id>
    <published>2023-10-03T11:50:26.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
    <content type="html"><![CDATA[<h2 id="站在人工智能的肩膀上，Caduceus如何推动Web3革命？"><a href="#站在人工智能的肩膀上，Caduceus如何推动Web3革命？" class="headerlink" title="站在人工智能的肩膀上，Caduceus如何推动Web3革命？"></a><strong>Standing on the shoulders of artificial intelligence: how does Caduceus drive the Web3 revolution?</strong></h2><p><img src="https://cdn-images-1.medium.com/max/3000/1*pEXZ-gyCl3k0vHyEk-av_w.png"></p><p>Caduceus uses advanced AIGC technology and distributed real-time edge rendering to generate rendered data content across diverse modalities. This innovation not only lowers content development costs but also improves the efficiency of content creation. By adopting a microservice architecture for distributed real-time rendering, Caduceus addresses the challenges of purely client-side architectures, which demand substantial client computing resources and high-performance devices. It also overcomes the high cloud service costs of purely cloud-based architectures and the latency caused by network bandwidth limitations.</p><p>Through its infrastructure, tools, and workflows, Caduceus gives users the ability to prepare data and to build, train, and deploy machine learning models for any application. Caduceus’s AIGC tools let developers rapidly deploy and fine-tune pre-trained AI models, cutting the time needed to set up and use these NLP models from weeks to minutes.</p><h2 id="人类反馈增强学习（RLHF）"><a href="#人类反馈增强学习（RLHF）" class="headerlink" title="人类反馈增强学习（RLHF）"></a><strong>Reinforcement learning from human feedback (RLHF)</strong></h2><p>RLHF is an algorithm used in ChatGPT to improve the accuracy of its responses. By incorporating human feedback, RLHF applies reinforcement learning techniques to continuously learn and refine ChatGPT’s conversational abilities, and this iterative process improves the overall quality and effectiveness of its conversations. In the long run, RLHF has the potential to foster adaptable AI systems across industries. Its applications span resource management, customer service, and clinical decision support, strengthening trust between users and AI, driving business outcomes, and accelerating technology adoption to raise efficiency and economic output.</p><p>Caduceus stands at the forefront of AI progress, using pre-trained AI models and RLHF to build an advanced AI DAppstore. This innovative marketplace serves as a comprehensive collection of AI tools and resources, covering categories such as copywriting, image generation, audio and video editing, development frameworks, and AI model training. By giving users quick access to tailored AI solutions, the AI DAppstore simplifies the process of finding the right AI tool for a specific need. With its strong computing power and vast datasets, the AI DAppstore contributes to the optimization and improvement of AI applications and services through a valuable feedback loop.</p><h2 id="Caduceus未来的AI生态功能布局"><a href="#Caduceus未来的AI生态功能布局" class="headerlink" title="Caduceus未来的AI生态功能布局"></a><strong>Caduceus’s future AI ecosystem layout</strong></h2><p>As AI technologies such as generative algorithms, large models, and multimodal techniques continue to advance, the field of generative AI is undergoing a major shift from perception and understanding to generation and creation. AIGC’s social influence on digital content and artistic expression is evident across fields and industries, driving the emergence of new technical paradigms and value frameworks that may pave the way for artificial general intelligence (AGI).</p><p>Backed by its technology and R&D, Caduceus’s current AI layout extends beyond blockchain into traditional industries such as the metaverse, gaming, content, social networking, finance, and education.</p><ol><li>Blockchain AI: In the AI space, Caduceus goes beyond the boundaries of the blockchain industry. By integrating AI technology resources and harnessing the momentum of “AI + blockchain”, it builds a distributed blockchain ecosystem and provides developers with more computing support.</li></ol><p>As the AI ecosystem develops, Caduceus will open its core AI algorithm capabilities to AI developers to support more diverse application needs, and will integrate blockchain technology to provide comprehensive capabilities to the layers above.</p><ol 
start="2"><li><p>Game AI: Caduceus uses data mining and machine learning algorithms to understand the network structure and users’ social behavior in metaverse games, further optimizing the user experience. On the security side, AI can be used to filter data and build a high-quality game development environment.</p></li><li><p>Content AI: Building on its content ecosystem, Caduceus will expand its operations in content aggregation, recommendation, and distribution.</p></li><li><p>Metaverse social AI: Caduceus provides natural language translation services, including technologies such as speech recognition and accurate OCR text recognition, to meet translation needs across scenarios and industries.</p></li></ol><p>As AIGC becomes a transformative force, the AI field has set out on a new trajectory that heralds the era to come. As a key player among metaverse platforms, Caduceus strives to establish and nurture an AI ecosystem. With foundational technologies such as large models and multimodal capabilities, Caduceus aims to become a major player in the “blockchain + AI” space and to empower every industry. By letting developers realize their full potential and fostering collaboration with a broad range of stakeholders, Caduceus is committed to building a vast, comprehensive, interconnected blockchain + AI ecosystem. Through this joint effort, the platform aims to raise the efficiency and productivity of traditional industries to new heights.</p><h2 id="加入我们的社区："><a href="#加入我们的社区：" class="headerlink" title="加入我们的社区："></a>Join our community:</h2><p><strong>Caduceus website:</strong> <a href="https://www.caduceus.foundation/">https://www.caduceus.foundation</a></p><p><strong>Discord:</strong> <a href="https://discord.com/invite/caduceus">https://discord.com/invite/caduceus</a></p><p><strong>Twitter:</strong> <a href="https://twitter.com/Caduceus_CMP">https://twitter.com/Caduceus_CMP</a></p><p><strong>Telegram:</strong> <a href="http://t.me/CaduceusMetaverse">http://t.me/CaduceusMetaverse</a></p><p><strong>Instagram:</strong> <a href="https://www.instagram.com/caduceus_cmp/">https://www.instagram.com/caduceus_cmp/</a></p>]]></content>
    
    
    <summary type="html">Caduceus drives the Web3 revolution with artificial intelligence and decentralized rendering. Through AI tools, reinforcement learning, and the AI DAppstore, it empowers users and transforms industries such as gaming, content, and finance. Join the Caduceus community to experience an interconnected blockchain + AI ecosystem.</summary>
    
    
    
    
    <category term="Artificial Intelligence" scheme="https://www.nablepart.com/tags/%E4%BA%BA%E5%B7%A5%E6%99%BA%E8%83%BD/"/>
    
    <category term="Blockchain" scheme="https://www.nablepart.com/tags/%E5%8C%BA%E5%9D%97%E9%93%BE/"/>
    
    <category term="Caduceus" scheme="https://www.nablepart.com/tags/Caduceus/"/>
    
    <category term="Web3 Revolution" scheme="https://www.nablepart.com/tags/Web3%E9%9D%A9%E5%91%BD/"/>
    
    <category term="Decentralized Rendering" scheme="https://www.nablepart.com/tags/%E5%88%86%E6%95%A3%E5%BC%8F%E6%B8%B2%E6%9F%93/"/>
    
    <category term="Reinforcement Learning" scheme="https://www.nablepart.com/tags/%E5%BC%BA%E5%8C%96%E5%AD%A6%E4%B9%A0/"/>
    
    <category term="AI DAppstore" scheme="https://www.nablepart.com/tags/AI-DAppstore/"/>
    
    <category term="Game AI" scheme="https://www.nablepart.com/tags/%E6%B8%B8%E6%88%8FAI/"/>
    
    <category term="Content AI" scheme="https://www.nablepart.com/tags/%E5%86%85%E5%AE%B9AI/"/>
    
    <category term="Metaverse" scheme="https://www.nablepart.com/tags/%E5%85%83%E5%AE%87%E5%AE%99/"/>
    
    <category term="AI Ecosystem" scheme="https://www.nablepart.com/tags/AI%E7%94%9F%E6%80%81%E7%B3%BB%E7%BB%9F/"/>
    
  </entry>
  
  <entry>
    <title>Centralized Resolution Management - Cloud Analytics</title>
    <link href="https://www.nablepart.com/229778f287ec/"/>
    <id>https://www.nablepart.com/229778f287ec/</id>
    <published>2023-10-01T15:10:00.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
    <content type="html"><![CDATA[<h2 id="Foreword"><a href="#Foreword" class="headerlink" title="Foreword"></a>Foreword</h2><p>CocoaPods cloud analysis is one of a series of cloud infrastructures provided by the Developer Tools department of ByteDance’s Client Infrastructure team. The Developer Tools team is committed to building the next-generation mobile cloud infrastructure: through technologies such as cloud IDE, distributed build, compilation, and linking, it optimizes the quality, cost, security, efficiency, and experience of the development and delivery process for the company’s various businesses.</p><h2 id="I-Background"><a href="#I-Background" class="headerlink" title="I. Background"></a>I. Background</h2><p>Under the iOS componentized development model, CocoaPods has become the standard dependency management tool in the industry. However, as business capabilities continue to expand and iterate, the number of components keeps growing, which sharply increases the complexity of app projects, seriously degrades the efficiency of dependency management, and even introduces potential stability problems. 
To manage the component dependencies of large-scale projects faster and more stably, the iOS build team created a centralized dependency management service, Cloud Dependency Analysis, which converges the dependency management process at the toolchain level, speeds up resolution, and aggregates failure reports.</p><p><img src="https://cdn.jsdelivr.net/gh/youngjuning/images@main/202310292038361.webp"></p><h2 id="II-What-is-Cloud-Dependency-Analysis"><a href="#II-What-is-Cloud-Dependency-Analysis" class="headerlink" title="II. What is Cloud Dependency Analysis?"></a>II. What is Cloud Dependency Analysis?</h2><p><img src="https://cdn.jsdelivr.net/gh/youngjuning/images@main/202310292038808.webp"></p><p>In CocoaPods-based iOS project management, every pod install must first synchronize the Spec repository (the component index information) to the local machine, usually by cloning a git repository; it then reads the Podfile, Lockfile, and other configuration files before entering the dependency analysis, dependency download, and project integration steps.</p><p><img src="https://cdn.jsdelivr.net/gh/youngjuning/images@main/202310292039803.webp"></p><p>Cloud Analysis is a cloud service built on ByteDance’s self-developed product repository platform: it uploads local project build materials through the toolchain, quickly returns dependency analysis results, and centrally manages iOS project dependencies. 
The cloud analysis service relies on the product repository to provide all component index information. During environment preparation, its local tools collect the local project materials and upload them to the cloud for the dependency resolution task; with a series of optimizations and server-grade performance, the cloud rapidly returns a resolution result, after which the local side proceeds with the subsequent dependency download and project integration.</p><p>Adopting cloud analysis is also extremely easy: there are no extra configuration files and no changes to the existing development model, so it integrates into a project in a <strong>non-intrusive way, with no onboarding cost and no impact on the development process</strong>. The only thing you need to do is add the cloud analysis RubyGem plugin to the CocoaPods toolchain and pass a switch parameter to the pod install command to enable the optimization.</p><h2 id="3-How-to-speed-up-resolution"><a href="#3-How-to-speed-up-resolution" class="headerlink" title="3. How to speed up resolution"></a>3. How to speed up resolution</h2><h3 id="3-1-Product-Repositories-Full-Component-Indexing-Information"><a href="#3-1-Product-Repositories-Full-Component-Indexing-Information" class="headerlink" title="3.1 Product Repositories (Full Component Indexing Information)"></a>3.1 Product Repositories (Full Component Indexing Information)</h3><p>The CocoaPods-based iOS development system manages build products loosely: different git repositories are used directly as index repositories for the build products (podspec files), effectively playing the role of a product repository. 
As iOS projects grow more complex, the proliferation of git repositories makes component index information hard to query and repository synchronization slow. The BitNest product repository is the company’s self-developed product management system for mobile, used to manage the build products generated during continuous integration. It centrally manages the podspec sources scattered across various git repositories, and a complete set of CLI commands can quickly pull and query podspec information. The cloud analysis service leverages these capabilities to access the full set of podspec sources in the cloud in real time, so a CocoaPods task no longer needs to update podspec source information in order to find the podspec of a component’s latest release.</p><h3 id="3-2-Caching"><a href="#3-2-Caching" class="headerlink" title="3.2 Caching"></a>3.2 Caching</h3><p><img src="https://cdn.jsdelivr.net/gh/youngjuning/images@main/202310292040291.webp"></p><p>Before describing the caching mechanism, let’s briefly walk through the flow of dependency analysis in pod install. On first execution (ignoring the lockfile), CocoaPods reads the plugins, sources, targets, pods, and so on from the Podfile via its DSL and creates the corresponding objects to complete the preparation phase. Within each Target object, each pod is created as a Dependency object carrying specific Requirements objects. All Dependency objects of all Targets are pushed onto the analysis stack one by one, and a Graph of dependency nodes is created. Each Dependency object then searches the corresponding Source repository for its pod according to its Requirements. 
If the Requirements carry no repository information, it traverses the public Sources of the Podfile to find the corresponding pod. Once found, it first builds a version list, selects all versions that satisfy the Requirements, and then reads the content of the corresponding podspec files. The resolver creates new Dependencies for the implicit pods in each Spec object and adds them to the analysis stack and the Graph. If, while traversing the Graph, a Spec version fails to satisfy the Requirements of another dependency with the same name, it is popped from the stack and removed from the dependency graph, until every Dependency has been matched to a Spec object and the analysis is complete. As you can see, the CocoaPods dependency management process involves a great deal of repetitive object creation, sorting, and searching, which greatly reduces development efficiency. Imagine instead that the objects required by CocoaPods tasks were always kept in a ready state: whenever a task request arrived, dependency analysis could run immediately and return results quickly. The cloud analysis service centralizes all CocoaPods dependency management tasks and builds an object caching mechanism for the repetitive ones, using a lazy-loading model to cache new objects so that the next task can enter dependency resolution immediately.</p><p><strong>3.2.1 Sorted Version Cache</strong></p><p><img src="https://cdn.jsdelivr.net/gh/youngjuning/images@main/202310292040071.webp"></p><p>When analyzing each pod, in order to obtain the latest version satisfying the dependency, CocoaPods creates a Version object for every version number in the source repository and sorts them. 
Currently, many internal products have accumulated tens of thousands of versions, and when no source is specified, both binary and source versions are sorted and read before arriving at the latest version that meets the requirements. Since component version numbers are split into segments by “.” and “-”, most versions have four or five fields or more; sorting tens of thousands of versions where each comparison traverses four or more fields multiplies the time complexity and greatly increases the time consumed.</p><p>To obtain an ordered version list faster, the repository service maintains a file of all pod component versions sorted from largest to smallest; whenever a new pod version is published, the repository inserts it into the file, and when a version is deleted, the corresponding entry is removed.</p><p>On top of the ordered version file, the main purpose of the Version cache in cloud analysis is to keep the version segmentation information in each Version object at all times, so that it can quickly determine whether the current Version meets a dependency’s requirements. Version caching speeds up the dependency management process by about 10-12 seconds.</p><p>Without a version cache hit, cloud analysis first reads the version file to obtain an ordered version list directly; if the length of that list does not match the length of the component’s version directory in the source, it falls back to the original method (a broken version list must not compromise the correctness of the analysis). 
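As a rough illustration (not the actual service code), the ordered version file and its length-check fallback can be sketched in Ruby, CocoaPods' own language. `VersionCache` and the file layout are assumptions, with the standard library's `Gem::Version` standing in for CocoaPods' segment-aware version comparison:

```ruby
# Hypothetical sketch of the pre-sorted version-file lookup described above.
require "rubygems" # Gem::Version / Gem::Requirement (stdlib)

class VersionCache
  # `sorted_versions` mirrors the repository's version file: largest first.
  def initialize(sorted_versions)
    @sorted = sorted_versions
  end

  # Newest version satisfying a requirement string such as ">= 1.0", or nil.
  def latest_satisfying(requirement)
    req = Gem::Requirement.new(requirement)
    @sorted.find { |v| req.satisfied_by?(Gem::Version.new(v)) }
  end

  # Fallback path: rebuild the ordered list with a full segment-aware sort
  # when the cached list disagrees with the on-disk version directory.
  def self.from_directory(versions)
    new(versions.sort_by { |v| Gem::Version.new(v) }.reverse)
  end
end

cache = VersionCache.from_directory(%w[1.2.0 1.10.0 0.9.0])
cache.latest_satisfying(">= 1.0") # => "1.10.0"
```

Because the list is kept ordered ahead of time, the common case is a single linear scan rather than creating and sorting tens of thousands of Version objects on every task.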
On a cache hit, it still checks whether the length of the cached version list equals the length of the pod’s version directory (a new version may have been published but not yet cached); if they differ, the missing versions are looked up from the version list array and the cache is corrected.</p><p><strong>3.2.2 Spec Object Cache</strong></p><p><img src="https://p3-juejin.byteimg.com/tos-cn-i-k3u1fbpfcp/8fa11bb6e4cc4ca19f592d971807f2dc~tplv-k3u1fbpfcp-zoom-in-crop-mark:3024:0:0:0.awebp"></p><p>When CocoaPods looks for a podspec that meets the dependency requirements among the sorted versions, it reads the contents of every matching podspec version and runs a dependency resolution traversal. If no version is specified, all versions of the podspec file are read; if no source is specified, the pod is read from every source in which it exists. Reading 10,000 podspec files takes about 30 seconds (depending on the disk).</p><p>Cloud analysis caches the podspec contents read from disk for each analysis task. The next task can then fetch the corresponding Spec object directly by the three fields source, pod_name, and version.</p><p>Meanwhile, to guarantee Spec correctness and guard against a podspec whose content changes without a version bump, the Spec object cache is stored as a multi-dimensional array, and the modification time of the podspec file is checked so that cached podspec contents can be updated to the latest commit, ensuring that the checksum matches the value computed by a local dependency analysis and that the cloud dependency analysis is correct. 
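A minimal sketch of such a modification-time-guarded cache; the names here (`SpecCache`, `fetch`) are hypothetical rather than the real Cloud Analysis classes, and the raw file text stands in for a parsed podspec:

```ruby
# Illustrative (source, pod_name, version)-keyed podspec cache with
# modification-time invalidation, as described above.
class SpecCache
  Entry = Struct.new(:spec, :mtime)

  def initialize
    @store = {} # { [source, pod_name, version] => Entry }
  end

  # Returns the cached spec unless the podspec file changed on disk since it
  # was cached -- guarding against content edits made without a version bump.
  def fetch(source, pod_name, version, podspec_path)
    key   = [source, pod_name, version]
    mtime = File.mtime(podspec_path)
    entry = @store[key]
    return entry.spec if entry && entry.mtime == mtime

    spec = File.read(podspec_path) # stand-in for actually parsing the podspec
    @store[key] = Entry.new(spec, mtime)
    spec
  end
end
```

On a hit with an unchanged mtime, the disk read (the roughly 3 ms per podspec implied by the figures above) is skipped entirely.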
In the future, we will also add a Spec cache cleanup strategy based on metrics such as the number of cache hits and Spec object expiration times.</p><p><strong>3.2.3 Cache Reuse</strong></p><p><img src="https://cdn.jsdelivr.net/gh/youngjuning/images@main/202310292041103.webp"></p><p>Cloud analysis also caches analysis results, so that an identical analysis task can reuse them directly the next time. After fetching the materials once, the cloud computes a global hash and segmented hashes over them, caching the <code>Complete Analysis Result</code> and the <code>Analysis Result Graph</code> respectively. For the next analysis task, if the materials are exactly the same, a complete analysis result is returned directly. If there is no exact match, a first-level <code>platform information key</code> is computed from information such as target and platform to identify the specific app; then the hash values of all component dependencies under the target are computed one by one to obtain a second-level <code>hash array key</code>, which maps to the graph value of an analyzed result. Fuzzy matching on the hash array keys finds a similar graph sharing the largest number of identical dependencies, and its locked dependencies are substituted into the materials to speed up the analysis. 
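The two-level result cache could be sketched roughly as follows; `ResultCache` and its key derivation are assumptions for illustration (the real service's lockfile substitution and match thresholds are omitted):

```ruby
# Hypothetical sketch of the exact-match + fuzzy-match result cache above.
require "digest"
require "set"

class ResultCache
  def initialize
    @complete = {} # global material hash       => complete analysis result
    @graphs   = {} # [platform key, dep hashes] => analysis result graph
  end

  def store(materials, platform_info, dependencies, result)
    @complete[sha(materials)] = result
    dep_hashes = dependencies.map { |d| sha(d) }.to_set
    @graphs[[sha(platform_info), dep_hashes]] = result
  end

  # Identical materials return the complete result directly; otherwise the
  # per-dependency hash sets are fuzzy-matched, and the cached graph sharing
  # the most dependencies under the same platform key is returned.
  def lookup(materials, platform_info, dependencies)
    exact = @complete[sha(materials)]
    return exact if exact

    pk         = sha(platform_info)
    dep_hashes = dependencies.map { |d| sha(d) }.to_set
    best = @graphs.select { |(key_pk, _), _| key_pk == pk }
                  .max_by { |(_, hashes), _| (hashes & dep_hashes).size }
    best && best.last
  end

  private

  def sha(str)
    Digest::SHA256.hexdigest(str)
  end
end
```

Hashing each dependency separately is what makes the fuzzy second-level match possible: two tasks that share most (but not all) dependencies still land near the same cached graph.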
Of course, this fuzzy matching has limitations and cannot accelerate the analysis of the lockfile material as originally uploaded.</p><h3 id="3-3-Material-Pruning"><a href="#3-3-Material-Pruning" class="headerlink" title="3.3 Material Pruning"></a>3.3 Material Pruning</h3><p>Cloud analysis transforms CocoaPods objects into byte streams for transmission. The uploaded materials and analysis results are as follows:</p><p><img src="https://cdn.jsdelivr.net/gh/youngjuning/images@main/202310292041729.webp"></p><p><strong>1. Uploaded materials</strong></p><p>The cloud analysis toolchain uploads the Podfile object, the Molinillo Graph object generated from the lockfile, the specified Source objects, the plugin adapter, and all external-source Spec objects (specifically, pre-release objects specified via git, path, or podspec). In fact, cloud analysis does not need all the information in these local objects, so they can be pruned: the Podfile object only needs its chain of target_definitions; the Molinillo Graph object only needs the nodes corresponding to the pods, without the logs of node operations; the Source object only needs its name and repo directory; and so on. In addition, some resolution-optimization plugins pass extra Config objects through the plugin adapters.</p><p><strong>2. Returned results</strong></p><p>The result returned by cloud analysis is a hash with each Target as the key and the corresponding array of Specs as the value. Before the result is returned, the Source of every Spec is pruned. 
The Source attached to each Spec is only used later to classify by the url field and to generate the lockfile, so its other unused fields can be deleted to minimize the payload and speed up the response. After pruning the returned results, the size of the <strong>transferred content can be reduced by about 10 MB or more</strong>.</p><p><img src="https://cdn.jsdelivr.net/gh/youngjuning/images@main/202310292041233.webp"></p><h3 id="3-4-Resolution-Policy-Compatibility"><a href="#3-4-Resolution-Policy-Compatibility" class="headerlink" title="3.4 Resolution Policy Compatibility"></a>3.4 Resolution Policy Compatibility</h3><p>To ensure the correctness and uniqueness (a single source of truth) of resolution results, cloud analysis is compatible with every CocoaPods resolution strategy optimization toolchain within ByteDance. Based on the project’s build configuration parameters, the local cloud analysis plugin identifies the specific resolution strategy and passes it to the server, which activates the corresponding resolution algorithm for fast resolution. Combined with the existing resolution optimization strategies and the cloud acceleration mechanisms, the CocoaPods dependency management process now <strong>returns results within seconds</strong>.</p><h2 id="IV"><a href="#IV" class="headerlink" title="IV."></a>IV. Summary</h2><p>This article shared a cloud-based CocoaPods optimization scheme used within ByteDance, which converges and reuses a large number of repetitive iOS pipeline build tasks, accelerates dependency management while guaranteeing the correctness of dependency resolution, and improves R&amp;D efficiency. The cloud analysis service has completed its first phase of development and is used by several core product lines within the company. 
For example, after Toutiao adopted the cloud analysis service, the time spent in the pipeline’s <strong>dependency analysis phase dropped by more than 60%</strong>. Going forward, CocoaPods download optimization and a project caching service are also under technical exploration, and related articles will be shared in due course, so stay tuned!</p>]]></content>
    
    
    <summary type="html">CocoaPods cloud analysis is one of the cloud infrastructures provided by the Developer Tools department of ByteDance&#39;s Client Infrastructure team. The Developer Tools team is dedicated to building the next generation of mobile cloud infrastructure; through technologies such as cloud IDE, distributed build, compilation, and linking, it optimizes the quality, cost, security, efficiency, and user experience of the development and delivery processes of the company&#39;s various businesses.</summary>
    
    
    
    <category term="Blockchain" scheme="https://www.nablepart.com/categories/Blockchain/"/>
    
    
    <category term="Cloud" scheme="https://www.nablepart.com/tags/Cloud/"/>
    
    <category term="ByteDance" scheme="https://www.nablepart.com/tags/ByteDance/"/>
    
    <category term="CocoaPods" scheme="https://www.nablepart.com/tags/CocoaPods/"/>
    
    <category term="Cloud Analytics" scheme="https://www.nablepart.com/tags/Cloud-Analytics/"/>
    
  </entry>
  
  <entry>
    <title>Teach you to use GoDaddy to create a website</title>
    <link href="https://www.nablepart.com/36de8c325100/"/>
    <id>https://www.nablepart.com/36de8c325100/</id>
    <published>2023-10-01T01:21:08.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
    <content type="html"><![CDATA[<h2 id="1-Account-creation"><a href="#1-Account-creation" class="headerlink" title="1. Account creation"></a>1. Account creation</h2><p>To set up a website with GoDaddy, you first need to create an account:</p><ol><li><p>Visit the <a href="https://www.godaddy.com/zh-sg">GoDaddy website creation page</a></p></li><li><p>Sign up for GoDaddy with an email address, Google account, or Facebook account.</p></li><li><p><strong>Select Start for Free to build your website for free.</strong></p></li></ol><blockquote><p>Note: The free plan only provides website building for one second-level domain name, e.g. <a href="http://yoursitename.godaddysites.com/">http://yoursitename.godaddysites.com</a>. If you want to create a website with your own top-level domain name, you need to upgrade to the paid plan.</p></blockquote><h2 id="2-Select-the-industry-and-website-name"><a href="#2-Select-the-industry-and-website-name" class="headerlink" title="2. Select the industry and website name"></a>2. Select the industry and website name</h2><p>Select the business category.</p><p>If industry is not available, then feel free to choose the closest industry. Just be sure to choose a template that matches your business, as GoDaddy will suggest templates that are appropriate for the type of business.</p><p>The next step is to choose a site name, this is the name that visitors will see when they search for your site, or you can change it later, don’t spend too much time here.</p><p>After choosing the site title, GoDaddy will create a sample homepage.</p><h2 id="3-Selection-of-themes"><a href="#3-Selection-of-themes" class="headerlink" title="3. Selection of themes"></a>3. Selection of themes</h2><p>It is possible to use the basic layout generated in the previous step when selecting an industry, and then, edit it to fit the brand, or choose a new layout.</p><p>A theme is just a layout to which text, pages and images can be added, even after publishing. 
If it doesn’t fit, replace it later on.</p><p>Click Template in the upper right corner of the dashboard to change the theme and Try a New Look to see more templates.</p><p>You can change the colors, fonts and buttons to match your style. Then, click Select in the lower left corner to activate the theme.</p><h2 id="4-Customize-your-website"><a href="#4-Customize-your-website" class="headerlink" title="4. Customize your website"></a>4. Customize your website</h2><p>GoDaddy generates several templates based on the industry chosen. If you want to create a website that suits your business, you may want to design pages, sections, images, and text that fit the brand’s style.</p><blockquote><p>In this section, as in the design of the site’s features generally, GoDaddy gives the user plenty of convenience; most things can be done within a few tries. However, it may still be difficult in places for the uninitiated. In that case, the elfsight widget tool is recommended: it packages common site features as widgets, so you can simply pick the widgets that match your business needs.<br>For example, building various forms on the site, inserting Google reviews to boost sales, etc.</p></blockquote><p>The method&#x2F;steps are as follows:</p><ul><li>Add picture</li></ul><p>Once the image is selected, click Update to the right of the image to add the image to the site. After the image is added, add alternative text (i.e., the image Alt attribute) to describe the image to the user.</p><p>Click on the image and the screen displays the Edit menu, where you can add filters and effects and crop the image.</p><p>Alternatively, a corporate logo can be added as follows:</p><p>Click on the website name, then click Upload or create a logo to select the logo from your computer; click the Insert button and the logo appears.</p><p>Then, click Done to continue to the next step.</p><ul><li>Add text</li></ul><p>This step is very simple.</p><p>Click on the sample text to highlight, delete or add. 
There are several text editing options available: italics, bold, numbered lists, bullets, and size.</p><p>However, do not add a new text box to the site, as GoDaddy does not allow clicking and dragging text boxes, only adding a new section.</p><ul><li>Add new section</li></ul><p>The site includes a title, about us, contact us, privacy policy and terms and conditions.</p><p>To make the site unique, add more features such as menus, blogs, calendars, social feeds, video and photo galleries. To do this, click on the plus (+) icon on the dashboard to remove unwanted sections as well.</p><p>GoDaddy allows you to add up to 20 sections per page and change the elements in the header and footer areas. However, no additional headers or footers can be added.</p><p>Click on the existing text to change the title or subtitle.</p><p>If you want to replace the title image, click on the image itself and then click on the Cover Media button to replace the title image. Then, click Change image to add an image or set of stock images, and a Feed to describe this image.</p><p>Edit the call-to-action (CTA) button:</p><p>Click the button itself to open a sub-menu to edit the action text, link to the site URL, and reposition the button.</p><p>Visitors will be looking for information such as contact information, terms of service, privacy policy, or links to social media accounts in the footer, so these pages and links need to be created.</p><p>Click Accent to change the background color and enter information in the fields detailing the company name, address, and phone number.</p><p>Next, click Social Accounts to link the account to the website.</p><ul><li>Add new page</li></ul><p>After adding text and images to your website, you have to create the page structure. 
To do this, click Website in the upper right corner of the dashboard to view the page structure.</p><p>Click the plus ( + ) icon on the Site Navigation menu to add a new page, or to rename, copy and delete created pages.</p><p>It is also possible to add external links and drop-down menus. If you want to reorder the pages, click the icon next to the plus ( + ) icon.</p><p>GoDaddy also allows a page to be set to private, making the content viewable only by specific users. For example, it can be set to be available only to users who have a code, link, or account.</p><h2 id="5-Manage-Settings"><a href="#5-Manage-Settings" class="headerlink" title="5. Manage Settings"></a>5. Manage Settings</h2><p>The Settings tab contains site profile and tracking features that share business information with visitors and help optimize the site; these can be edited and enabled, and new features added.</p><h3 id="Website-introduction"><a href="#Website-introduction" class="headerlink" title="Website introduction"></a>Website introduction</h3><p>All details about the site are included in the site profile.</p><ul><li><p>Basic Information: Fill in information such as website name, email address, phone number and physical address.</p></li><li><p>Social Media Links: connect to social media accounts including Facebook, Instagram, LinkedIn, TikTok, Pinterest, Yelp, and Discord.</p></li><li><p>Favicon: Upload an icon that you want visitors to see next to the website name.</p></li><li><p>Be indexed by Google and show up in Google search results: click Start Optimizing to optimize the site for keywords.</p></li></ul><p>GoDaddy helps identify keywords that are relevant to the site.</p><h3 id="Analysis-and-tracking"><a href="#Analysis-and-tracking" class="headerlink" title="Analysis and tracking"></a>Analysis and tracking</h3><p>This setting tracks and analyzes visitors’ interests to leverage Google, Facebook, Pinterest and Google AdSense.</p><p>So, if you already subscribe to a tracking or analytics service, add its ID here.</p><p>Additionally, set the cookie banner to inform website visitors that cookies are being used, and you can customize the default cookie message to suit your industry.</p><h2 id="6-Enhance-your-website"><a href="#6-Enhance-your-website" class="headerlink" title="6. Enhance your website"></a>6. Enhance your website</h2><p>GoDaddy offers advanced features such as appointment scheduling, pop-ups, and social media integration, and also allows users to run an online store and customize domain names.</p><p>Learn how to enhance your site with the available features below.</p><p>Click the Popup button at the bottom of the site to add a popup message, edit images and text, and choose where the link will appear on the page.</p><p>For appointment scheduling, click on “Services” under the “Website” tab.</p><p>Here, you can include images and details of pricing, staff availability, Zoom integration and advance booking. 
Fill in the fields and click save.</p><p>Additionally, this is a great place to integrate your social media accounts with your website.</p><p>Add buttons to direct visitors to social media pages with a single click.</p><p>To do this, click Add a New Section and select social .</p><p>If you want to run an online store, you can add products and their names, sizes, categories, descriptions, shipping details, etc., or import these details from a CSV file or from other platforms such as Square and eBay.</p><p>However, GoDaddy does not allow receiving cryptocurrency payments.</p><h2 id="7-Preview-and-publish-the-website"><a href="#7-Preview-and-publish-the-website" class="headerlink" title="7. Preview and publish the website"></a>7. Preview and publish the website</h2><p>Before the website goes live, check how it looks to visitors on desktop and mobile devices.</p><p>Browse through each page to make sure that information such as contact information, hours of operation, etc. is correct, and check for spelling errors, broken links, and the general layout of the site.</p><p>If corrections are needed, click Edit Site. If everything is ready, click Publish to publish the site.</p><h2 id="8-Customize-the-domain-name"><a href="#8-Customize-the-domain-name" class="headerlink" title="8. Customize the domain name"></a>8. Customize the domain name</h2><p>To publish a site on a custom domain, either connect to your existing domain or purchase a new domain from GoDaddy.</p><p>If you purchased the domain from GoDaddy, click Publish to be taken to the Choose a Domain page and then connect the domain to the site.</p><h3 id="How-to-buy-a-domain-name-from-GoDaddy"><a href="#How-to-buy-a-domain-name-from-GoDaddy" class="headerlink" title="How to buy a domain name from GoDaddy:"></a>How to buy a domain name from GoDaddy:</h3><p>Select Choose a Domain and click Buy a domain only. 
After you find a suitable domain name, click Buy and make a payment as shown below.</p><p>If you already own a domain name that was not bought here, select I already have a domain, then click I have a domain outside of GoDaddy and follow the prompts.</p><p>After completing all payments and processes, the website will go live.</p>]]></content>
    
    
    <summary type="html">Teach you to use GoDaddy to create a website</summary>
    
    
    
    <category term="Domain name" scheme="https://www.nablepart.com/categories/Domain-name/"/>
    
    
    <category term="Domain name" scheme="https://www.nablepart.com/tags/Domain-name/"/>
    
    <category term="Godaddy" scheme="https://www.nablepart.com/tags/Godaddy/"/>
    
    <category term="Github" scheme="https://www.nablepart.com/tags/Github/"/>
    
    <category term="CNAME" scheme="https://www.nablepart.com/tags/CNAME/"/>
    
    <category term="DNS" scheme="https://www.nablepart.com/tags/DNS/"/>
    
  </entry>
  
  <entry>
    <title>The rebirth of Lords of the Fallen continues.</title>
    <link href="https://www.nablepart.com/0782dd4f5790/"/>
    <id>https://www.nablepart.com/0782dd4f5790/</id>
    <published>2023-09-30T12:00:00.000Z</published>
    <updated>2025-08-25T09:00:39.794Z</updated>
    
<content type="html"><![CDATA[<h2 id="2014’s-Fallen-King"><a href="#2014’s-Fallen-King" class="headerlink" title="2014’s Fallen King"></a>2014’s Fallen King</h2><p>In 2014, Lords of the Fallen was released. On the Wikipedia page for “Soulslike” (Souls-like games), it ranks at the top of all games except for a few titles produced by FromSoftware itself.</p><p>However, because of the game’s partial imitation of the Dark Souls series and the production team’s obsessive pursuit of difficulty, this game, which was the first to open the era of Souls-like games, also opened the Pandora’s box of “Souls-like games are mostly trash”.</p><p><img src="https://s2.loli.net/2023/10/30/MBrNHRCPXKIqaAG.png" alt="image.png"></p><p>Nine years later, a newly reorganized production team, HEXWORKS, brought out a brand new Fallen Lords. The game’s Chinese name differs by only one character, while the English name is the exact same “Lords of the Fallen”; reusing the name while casting off the poorly received predecessor is a clear sign of their determination to make a clean break for this IP.</p><p>But whether it’s the content of the game, where the protagonist saves the world along the way, or the mixed reviews from players outside the game, the road to the rebirth of Lords of the Fallen still faces many challenges.</p><h2 id="HEXWORKS-brought-out-a-brand-new-Fallen-Lords"><a href="#HEXWORKS-brought-out-a-brand-new-Fallen-Lords" class="headerlink" title="HEXWORKS, brought out a brand new Fallen Lords"></a>HEXWORKS, brought out a brand new Fallen Lords</h2><p>Prior to its release, gamers had high expectations for Lords of the Fallen because its strengths were already visible in the previews.</p><p>The dual-world framework of surface and shadow maps is undoubtedly this game’s most central setting and its biggest feature. 
This setting is not unique to Lords of the Fallen, but when it is combined with an interconnected 3D Metroidvania-style map and placed in a crisis-ridden atmosphere of exploration, the boost to the game’s explorability, as well as the demands on the map’s design, is easy to imagine.</p><p>In this regard, Lords of the Fallen’s map designers can safely and proudly say that they’ve delivered.</p><p><img src="https://s2.loli.net/2023/10/30/v86LHyui9MbtRK5.png" alt="image.png"></p><p>The maps for each level in the game are still the usual Souls-esque boxy design, with paths, trails and shortcuts stringing a level together, allowing the player to gradually expand the safe zone towards the boss room as they explore. At the scale of the game as a whole, though, instead of the conservative level-by-level boxed-in design that most games of its kind would reference, Lords of the Fallen focuses on the full-map connectivity that players remember most from the first generation of Dark Souls.</p><p>What’s wonderful about this type of map is that it doesn’t just connect the entrances and exits of each map one after the other, but allows two neighboring maps to connect and form a whole with more or less shortcuts.</p><p>When the player has not yet finished exploring this map, they will find the name of the larger map floating on the screen as they walk, unknowingly intruding on another map.</p><p>Perhaps the new shortcut opened is not commonly used, but it opens a door that “can’t be opened from this side”, and the self-satisfaction of saying “Oh, so this is where it leads” is the rush of exploratory fulfillment gushing out in that moment.</p><p><img src="https://s2.loli.net/2023/10/30/NMzH5ac816yZIPr.png" alt="image.png"></p><p>On top of that, the dual-world setting adds another dimension to the already excellent map design.</p><p>Although the setting is two worlds, it doesn’t allow for the scale of exploration to be crudely multiplied by two; 
in general, there’s more to the inner “shadow world” than the surface world, and a platform that was previously invisible, a ladder that wasn’t in the surface world, or a river that was previously underwater can become a necessary pathway for the player to continue onward.</p><p>The Shadow Realm can be thought of as a large “hidden wall” area that can only be seen with props in key places. Its contribution to the explorable scale is very limited, but it adds a lot of fun to the exploration of the map in terms of connecting mechanisms, hiding shortcuts, and hiding props.</p><p>On top of this, the Shadow Realm’s unique hellscape art runs through all of the otherwise very differently styled maps, unifying the look of each sub-map. This does, to a certain extent, reduce the recognizability of the maps, and the doubled scenery also makes wayfinding harder for the player. Still, from the point of view of exploration design alone, Lords of the Fallen can stand even among FromSoftware’s own works.</p><p>But Lords of the Fallen isn’t just a harmless tourist simulator. 
The aggressive design language is evident not only in the map design, but also in the combat that more players care about.</p><h2 id="The-first-boss-of-Lords-of-the-Fallen"><a href="#The-first-boss-of-Lords-of-the-Fallen" class="headerlink" title="The first boss of Lords of the Fallen"></a>The first boss of Lords of the Fallen</h2><p>The system carefully teaches the player a wide range of operations in the first ten minutes of the game, but only a few of them will be used regularly during the game’s progression.</p><p>When I first encountered the first boss of Lords of the Fallen, I tried the two common melee playstyles of Souls games, the parry and the roll, only to be met with two diametrically opposed impressions of difficulty.</p><p>The game’s shields are varied, but most of them have less than 50% physical defense, meaning you’ll still take more than 50% damage if you defend with a shield. It’s just that this damage is represented as Void Blood, which can be recovered by attacking the enemy after blocking, and is lost for good when you are attacked again, similar to the rally mechanic in Bloodborne.</p><p>The problem, however, is that a parry in Lords of the Fallen, even if successful, still deducts the same amount of Void Blood as a block. And although enemy soldiers in the game have a visible posture gauge, a successful parry neither staggers the enemy nor interrupts its combo until the posture gauge is emptied, and the execution damage after a stagger isn’t very high either. 
This results in a very low payoff for the risky parry, plus the risk of being cut down in one hit after parrying an opponent’s whole combo has emptied your Void Blood.</p><h2 id="In-contrast-there-is-the-game’s-overused-evasion"><a href="#In-contrast-there-is-the-game’s-overused-evasion" class="headerlink" title="In contrast, there is the game’s overused evasion."></a>In contrast, there is the game’s overused evasion.</h2><p>Lords of the Fallen categorizes evasive maneuvers into light, medium, and heavy based on the equipment weight system, but even the medium-weight dodge I used still outclassed that of every similar game I’ve experienced.</p><p>The extra-long dodge distance, the extremely short startup and recovery, the double dodge on a repeated press available from the newbie tutorial onward, and the generous invincibility frames all made it seem as if I’d entered a different game right after trying out the parry. The protagonist’s roll is so overtuned that there are times when it feels like you’ve mistimed your button press, yet you still dodge through without any surprises.</p><p>In addition, the monsters in the game are designed quite sensibly: there are no extremely counter-intuitive moves, and the frequency of enemies’ delayed-swing mix-ups is quite restrained, so there are fewer battles of wits with the computer, and a single roll can cope with the majority of enemy moves, which keeps the game’s boss challenges from being that difficult.</p><p><img src="https://github.com/zizhuspot/gaming.varygames.com/assets/134364698/13cc764b-da5c-4e7d-927a-2d96fcf09326" alt="image"></p><p>Large bosses with hard-to-read attack tells are much harder in comparison.</p><p>Perhaps in an effort to maintain the high difficulty standards of Souls-like games, the production team gave the bosses limited challenge and put the difficulty 
in the small groups of monsters along the exploration routes. This stacking of monsters became the most criticized issue in Lords of the Fallen after its release.</p><p>Early in the game, the issue wasn’t really noticeable, in my opinion, because of the limited variety of monsters, and with my years of experience with Souls-like games I could still push through slowly but efficiently. It was only in the mid-game that the intensity of the smaller monsters began to rise, no longer dying to just a simple slash or two, while the number of monsters faced skyrocketed.</p><p>The mid-game monsters also became more varied: fire-breathing, jumping cleaves, spinning windmill attacks. Facing these different move sets demands more complex responses, and fights often turn into running battles.</p><p>The number of ranged monsters also shoots up at this point; no matter where you flee, a few fireballs always come chasing from somewhere. And if the player is killed and enters the Shadow Realm, even more Shadow Realm monsters are waiting, along with sneak attacks from blue fireballs.</p><p>It was at this moment that I finally appreciated how deeply the production team, who claimed to be “grandsons of Miyazaki Hidetaka”, understood that famous shrine of suffering, the “twin archers of Anor Londo”.</p><p>With such an intense array of monsters, the thing a panicked player most wants to see is a life-saving save point. The reason players complained so loudly about the game’s severe monster stacking may be precisely that the game has too few effective save points.</p><p>I say effective save points because there are two types of save points in the game. One type has a fixed location, just like in other games, while the other is a temporary save point that the player can summon by using a consumable item in designated spots. 
There are a lot of these designated spots; you see one every few minutes. The problem is that only one temporary save point can exist at a time (the old one disappears when a new one is summoned), and the consumable props are extremely limited in number.</p><p>With this setup, the distance between many of the save points becomes too long. Players will often run out of supplies halfway through a patient push, die impatiently once the healing runs dry, then repeat the previous stretch in an anxious mood, until, patience exhausted, they parkour past a string of monsters and finally die in vain amid a hail of incoming fire; naturally, the endless monsters behind them leave a deep impression.</p><p>So did the production team really not take the monster stacking into account? I’m sure there’s no way they didn’t notice. It’s just that their vision of the game and its gameplay was most likely skewed, and that’s why players are now wailing.</p><h2 id="Beyond-the-normal-class"><a href="#Beyond-the-normal-class" class="headerlink" title="Beyond the normal class"></a>Beyond the normal class</h2><p>After finishing a first playthrough, one can see that Lords of the Fallen has RPG attributes far stronger than the normal class of Souls games, along with the traits of a Diablo-style loot game.</p><p>Whether it is the game’s rich variety of weapons, the full armor sets every monster can drop, or the ability to recolor each piece of clothing, everything gives the game a dress-up, loot-hunting flavor. 
Weapons can all be embedded with gems of different characteristics, and together with pendants, rings, lanterns, ranged weapons and props, there are many ways to tune the protagonist’s performance, which makes the protagonist’s build possibilities nearly endless, at least in theory.</p><p><img src="https://s2.loli.net/2023/10/30/iqCxNvRojwcX3EG.png" alt="image.png"></p><p>In terms of projectiles alone, there is a variety of spells, physical throwables, and support items whose power is boosted by the stat panel.</p><p>During my own playthrough, I actually found a solution to the stacked-monster problem. In a map dominated by fire-attribute monsters, a greatsword was hidden that made every attack come with a Ring of Fire attack; when monsters clustered, clearing them with it was far more efficient than with a normal Strength/Dexterity greatsword. But I ended up not using it.</p><p>On the one hand, its extra fire-attribute attack power doesn’t work well in this map, and it isn’t versatile in other scenarios, so the advantage never comes into play against a single enemy. On the other hand, respec props are still a luxury item in the mid-game, and players who are struggling to make ends meet simply can’t pull out a tailored build for every situation.</p><p>Just because a problem has a solution doesn’t mean the problem doesn’t exist. 
I suspect that, in the face of the monster-stacking problem, the production team assumed players would frequently switch classes late in the game, or play online with several classes complementing each other; they looked at what is, for a first playthrough, still a Souls-like game more from an RPG perspective, and so never went back to fix these problems.</p><p><img src="https://s2.loli.net/2023/10/30/nTQof9dlRFGvcW3.png" alt="image.png"></p><p>Tanking through monster piles with healing spells and healing casts is also a solution, but it’s far too extravagant.</p><p>The production team’s RPG-style mindset can also be seen in the distribution of many of the monsters and in the move sets.</p><p>Many of the game’s bosses, while having patterned moves, have a lot of wide-range AOEs and plenty of ranged attacks. Their moves are easy to dodge, but many attack opportunities must be sacrificed for safety. The pace of battle is constantly interrupted, and it always feels as if these bosses weren’t designed for one player. Compared to a slower action game like Dark Souls, Lords of the Fallen’s boss design instead carries shades of the multiplayer bosses in MMOs.</p><p>From the dual-world map design, stunning at first glance, one can see the ambition of this reorganized, reborn production team to do right by “Lords of the Fallen” and to pursue something of their own on top of the Souls-like foundation. But perhaps it is precisely because of their radical design direction that the final product is misaligned with players’ comfort zones and high expectations.</p><p>The production team did not choose the safest route of “canned Souls”, repeating the formula already proven to be the most secure path for a Souls game, but tried their own way. 
This spirit deserves to be recognized, but of course the risk of widening the gap between strengths and weaknesses is a risk they need to take on their own.</p><p><img src="https://s2.loli.net/2023/10/30/OG35JufUIlKqisQ.png" alt="image.png"></p><p>It’s just that, unlike nine years ago or the Purgatory in the game’s background, this time around, gamers are expecting more from Lords of the Fallen, and naturally, there will be more disappointment. The game currently has a poor reputation, but it’s not without redeeming qualities, and gamers who are willing to listen are always welcome.</p><p>Just a few days into the game’s release, the game released five patches in a row, which contained a large number of gameplay tweaks in addition to some hardware-related performance optimizations. Many of these tweaks were preceded by the phrase “after listening to the community’s feedback” - it seems that the production team didn’t stick to their stubbornness, but rather intended to take a low profile in the face of today’s word-of-mouth problems.</p><p>The game’s release isn’t the end of Lords of the Fallen’s rebirth, which is clearly still ongoing.</p>]]></content>
    
    
    <summary type="html">In 2014, Lords of the Fallen was released. On the Wikipedia page for &quot;Soulslike&quot; (Souls-like games), it ranks at the top of all games except for a few titles produced by FromSoftware itself.</summary>
    
    
    
    <category term="Game News" scheme="https://www.nablepart.com/categories/Game-News/"/>
    
    
    <category term="The Lord of the Fallen" scheme="https://www.nablepart.com/tags/The-Lord-of-the-Fallen/"/>
    
    <category term="Soulslike" scheme="https://www.nablepart.com/tags/Soulslike/"/>
    
    <category term="Fallen King" scheme="https://www.nablepart.com/tags/Fallen-King/"/>
    
  </entry>
  
  <entry>
    <title>Binding a personal blog on Github to a Godaddy domain name</title>
    <link href="https://www.nablepart.com/fe2651968d1a/"/>
    <id>https://www.nablepart.com/fe2651968d1a/</id>
    <published>2023-09-27T04:03:40.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
    <content type="html"><![CDATA[<h2 id="I-Purchase-a-domain-name"><a href="#I-Purchase-a-domain-name" class="headerlink" title="I. Purchase a domain name"></a>I. Purchase a domain name</h2><p>Just go to the Godaddy website <a href="https://sg.godaddy.com/zh/">sg.godaddy.com&#x2F;zh&#x2F;</a> and buy it yourself.</p><h2 id="II-Configuring-Github"><a href="#II-Configuring-Github" class="headerlink" title="II. Configuring Github"></a>II. Configuring Github</h2><h3 id="1、New-CNAME-file"><a href="#1、New-CNAME-file" class="headerlink" title="1、New CNAME file"></a>1、New CNAME file</h3><p>Create a new CNAME file in the sources directory of our Hexo project with our domain name in it.</p><figure class="highlight txt"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">xxxx.com</span><br></pre></td></tr></table></figure><p>Redeploy the project afterward:</p><figure class="highlight txt"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">hexo g</span><br><span class="line">hexo d</span><br></pre></td></tr></table></figure><blockquote><p>If you are using hexo framework to build a blog and deploy it to Github Pages.<br>Every time hexo g hexo d will push everything in the public folder of your blog directory to the Github Pages repository and overwrite the CNAME file. To solve this problem, <strong>you can directly add the CNAME file to the source folder</strong>, so that you don’t have to worry about overwriting the CNAME file in the repository every time you push it. 
</p></blockquote><p>After that, we can see this file in the root directory of the Github project for our website.</p><blockquote><p>There is another way to do this: on your website’s Github project, click Settings, find Custom domain, fill in the domain name you applied for, and save it. You will also see this file in the root of your Github project, but every time you deploy the project, the CNAME file will disappear, which is essentially the same as creating a new CNAME and placing it in the root of your local project instead of in the source folder.</p></blockquote><h3 id="2-Add-records-in-DNS"><a href="#2-Add-records-in-DNS" class="headerlink" title="2. Add records in DNS"></a>2. Add records in DNS</h3><p>Add the following records to your DNS configuration (at the domain name resolution provider, using dnspod as an example below):</p><table><thead><tr><th>Host</th><th>Record types</th><th>Points to</th></tr></thead><tbody><tr><td>@</td><td>A</td><td>185.199.108.153</td></tr><tr><td>www</td><td>CNAME</td><td>username.github.io</td></tr></tbody></table><blockquote><p>This way people can access your website with or without www (in fact, with the www form, the name is first resolved via the CNAME to username.github.io and then on to the GitHub Pages servers, so there is one extra conversion in the middle).<br>Above, we used a CNAME alias record for www. Some people use an A record instead, writing the IP address of the GitHub Pages servers as the value, but that IP can change and eventually break resolution, so a CNAME alias record is recommended over a hard-coded IP.<br>For example:<br>(1) First add a CNAME with the host record @ and the record value username.github.io.<br>(2) Then add another CNAME with the host record www and the record value also username.github.io, 
replace username with your own GitHub username.</p></blockquote><h3 id="3、Modify-DNS-address-in-GoDaddy"><a href="#3、Modify-DNS-address-in-GoDaddy" class="headerlink" title="3、Modify DNS address in GoDaddy"></a>3. Modify the DNS address in GoDaddy</h3><p>(1) In the My Account drop-down menu in the upper right corner, click -&gt; My Products:</p><p>(2) Click the DNS button after the domain name:</p><p>(3) Change the domain name servers to:</p><figure class="highlight txt"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">f1g1ns1.dnspod.net</span><br><span class="line">f1g1ns2.dnspod.net</span><br></pre></td></tr></table></figure><p>(4) Wait for your DNS configuration to take effect:</p><p>The DNS configuration does not take effect immediately; check your domain again after a minute or so to see whether the configuration has succeeded.</p><h2 id="III-References"><a href="#III-References" class="headerlink" title="III. References"></a>III. References</h2><ul><li><a href="https://www.zhihu.com/question/31377141">How to bind your own domain name to github</a></li><li><a href="http://www.jianshu.com/p/05289a4bc8b2">How to Build a Standalone Blog - A Concise Github Pages &amp; Hexo Tutorial</a></li><li><a href="http://www.cnblogs.com/openxxs/p/5950598.html?utm_source=itdadao&utm_medium=referral">Building a Static Personal Blog via GitHub and GoDaddy</a></li></ul>]]></content>
    
    
    <summary type="html">How to bind a personal blog on Github to a Godaddy domain name</summary>
    
    
    
    <category term="Domain name" scheme="https://www.nablepart.com/categories/Domain-name/"/>
    
    
    <category term="Domain name" scheme="https://www.nablepart.com/tags/Domain-name/"/>
    
    <category term="Godaddy" scheme="https://www.nablepart.com/tags/Godaddy/"/>
    
    <category term="Github" scheme="https://www.nablepart.com/tags/Github/"/>
    
    <category term="CNAME" scheme="https://www.nablepart.com/tags/CNAME/"/>
    
    <category term="DNS" scheme="https://www.nablepart.com/tags/DNS/"/>
    
  </entry>
  
  <entry>
    <title>Harvest Day 3 review: Hardcore &quot;zero dollar purchase&quot; simulation in a high-tech art style</title>
    <link href="https://www.nablepart.com/8d5d186deed3/"/>
    <id>https://www.nablepart.com/8d5d186deed3/</id>
    <published>2023-09-24T12:00:00.000Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
    <content type="html"><![CDATA[<p><img src="https://s2.loli.net/2023/10/30/AwOhlKqr8a7JYHN.png" alt="image.png"></p><p><strong>If matching is not free, cooperation is meaningless.</strong></p><p>On September 21st, the “robbery simulator” Harvest Day 3 was officially released on Steam. The “Harvest Day” gang, the most powerful criminal gang of the previous game, had saved up enough money for their pensions and were ready to quit, but they were framed by unknown forces, their assets were wiped out, and they were forced to regroup and come back.</p><p>After ransacking the small bank in the beta, they go on to commit seven more major crimes in New York City, making a lot of money while getting revenge on the people behind the frame-up.</p><p>We explained the general gameplay mechanics in our previous Harvest Day 3 beta experience article, and given that they’ve largely been carried over to the official version, this article focuses more on showing the overall experience of the official version with the new heists.</p><p><img src="https://s2.loli.net/2023/10/30/2MKZGm9SOCAs7w8.png" alt="image.png"></p><p>High-tech criminality is a common theme in all seven heists, with both the security teams and the members of the Harvest Day gang utilizing modern technology and equipment.</p><p><img src="https://s2.loli.net/2023/10/30/tZj9bfdnC5yF3Ko.png" alt="image.png"></p><h2 id="The-modern-norm-swiping-your-cell-phone-into-oblivion"><a href="#The-modern-norm-swiping-your-cell-phone-into-oblivion" class="headerlink" title="The modern norm: swiping your cell phone into oblivion"></a>The modern norm: swiping your cell phone into oblivion</h2><p>Sneaking through doors and picking locks is a must for any burglar, but electronic locks usually cannot be picked: they demand card swipes, face scans, eyeball scans, or passwords, to name just a few. There are also locks that want QR codes, which require the player to copy one from an employee’s cell phone. 
Unlike physical key cards, QR codes can be shared among team members in real time, which is obviously questionable security design, but a flat-out convenience for robbers.</p><p><img src="https://s2.loli.net/2023/10/30/JvKIbUijcmA6P4n.png" alt="image.png"></p><p>The back doors of the trucks are more securely locked, but the Harvest Day gang is just as good at those. They paralyze the truck with an electromagnetic pulse (EMP) device, then use a hacking device to commandeer the vehicle, and finally drill under the chassis and cut the cables controlling the door locks - the whole process is smooth and seamless.</p><p><img src="https://s2.loli.net/2023/10/30/2DaWlgodCxrfQ6X.png" alt="image.png"></p><h2 id="The-moment-of-the-EMP-explosion"><a href="#The-moment-of-the-EMP-explosion" class="headerlink" title="The moment of the EMP explosion"></a>The moment of the EMP explosion</h2><p>The level of technology has advanced, and what the gangs are trying to rob has evolved with the times. The main story of Harvest Day 3 is inseparable from the two threads of “data” and “cryptocurrency”, and the robbers don’t miss out on stealing hard disks and servers containing important information.</p><p><img src="https://s2.loli.net/2023/10/30/cKvYPld5t1F8kfb.png" alt="image.png"></p><p><img src="https://s2.loli.net/2023/10/30/jGJPu21NWE57mXI.png" alt="image.png"></p><p>High-tech designs such as these not only fit the 2023 setting and give Harvest Day 3 a unique look, but also serve the gameplay.</p><p>The combination of mechanical and electronic locks forms the complete infiltration curriculum of Harvest Day 3: newly arrived gang members can properly handle any non-surprise situation during an infiltration simply by learning how to deal with each type of lock.</p><p>The high-tech security also demands that players make fewer mistakes. 
Once an alarm is triggered by a failed infiltration, the electronic locks shut down completely, and the gangsters have to fall back on traditional methods, such as thermite or drills, to force their way through the physical blockade. Cryptocurrency wallets will also self-destruct when the alarm is triggered, destroying the bandits’ American Dream.</p><p>In theory, accidents can be avoided. Of the eight heists in the current version of Harvest Day 3, all but one allow players to infiltrate solo and take most (or sometimes all) of the map’s loot; the exception is the second heist, the “Road Rage” truck hijacking, which can only be taken by force.</p><p>The option to vote to restart the level does not change the randomized layout of the level, and memorizing the layouts to achieve the perfect infiltration is just a matter of time and patience.</p><p>One more reliable teammate can even reduce the chance of accidents. In the official version of the game, the design of assigning three security levels to specific areas still shines, and “to mask or not to mask” is always a question worth pondering.</p><p>Players who don’t wear masks are ignored by civilians in most areas, and even if they’re running around with loot on their backs, only the guards in private areas will protest, and then only by politely escorting the player back to a public area. 
As a trade-off, unmasked players are unable to jump, which keeps them from climbing over obstacles or crawling through ventilation ducts, limiting their infiltration paths considerably.</p><p><img src="https://s2.loli.net/2023/10/30/uapyihqGxklTm4S.png" alt="image.png"></p><h2 id="Guards-aren’t-interested-in-the-contents-of-the-player’s-backpack"><a href="#Guards-aren’t-interested-in-the-contents-of-the-player’s-backpack" class="headerlink" title="Guards aren’t interested in the contents of the player’s backpack"></a>Guards aren’t interested in the contents of the player’s backpack</h2><p>Once the mask is on, it can’t be taken off and weapons must be drawn. Masked players can take unusual routes and use weapons to control crowds and kill security guards, but they are also more likely to set off alarms, at which point the once-polite guards will pull out their guns and shoot, triggering a bloodbath with law enforcement.</p><p><img src="https://s2.loli.net/2023/10/30/S9ZGOuqVWgBRlKv.png" alt="image.png"></p><p>Ventilation ducts make excellent infiltration paths</p><p>But you have to wear a mask to get in.</p><p>The most common infiltration strategy, obviously, is to split up into two groups, one wearing masks to deal with security and the other unmasked to transport the loot. The AI teammates in the infiltration are just as bad as in Generation 2 and still won’t actively pick up loot bags, so the only way to use the above strategy is to find real teammates to work with. However hard it is to find reliable teammates, just the right amount of assistance can double the fun of an infiltration.</p><p>Even if something goes wrong, it doesn’t mean the game is completely over: loud assaults are an integral part of Harvest Day’s gameplay. Harvest Day 3’s strong attack can’t be as cutthroat and unstoppable as the 2nd generation’s. 
The player’s resource supply and weapon power are limited, and team members need to look out for each other in order to retreat from the police’s man-to-man tactics.</p><p>However, with the new engine, the player’s running, jumping, and sliding are completely unrestricted, the feel of the weapons has been optimized, and there is a new mechanism of holding hostages as meat shields, so the assault experience is very good.</p><p><img src="https://s2.loli.net/2023/10/30/TpzaVbcEek3w6lF.png" alt="image.png"></p><p>Shotgun’s unique flesh and blood effect.</p><p><img src="https://s2.loli.net/2023/10/30/xDecKNZtr4Ms3wj.png" alt="image.png"></p><p>Taking hostages as a meat shield is a tried-and-true tactic.</p><p>There’s also the “Gunslinger” weapon, which functions like an ultimate ability: once you’ve built up your rage bar, you can call in an airdrop to get it. The grenade launcher, which appeared in the beta, has been enhanced in the official version to clear the field. When you reach a certain level, you can also unlock an anti-materiel sniper rifle with smart sights, a powerful weapon that can be used against heavily armored soldiers.</p><p><img src="https://s2.loli.net/2023/10/30/g5SxPRLJfphe4mo.png" alt="image.png"></p><h2 id="There-are-frames-you-don’t-fight"><a href="#There-are-frames-you-don’t-fight" class="headerlink" title="There are frames you don’t fight?"></a>There are frames you don’t fight?</h2><p>The growth mechanic outside of the heists also encourages players to head for Plan B without looking back. 
Quite different from the Gen 2 idea of leveling up by infiltrating and accumulating capital, in Gen 3 it takes a lot of high-profile action to unlock the capital needed for a low-profile approach.</p><p>The leveling system has nothing to do with how much the player earns from heists; it only depends on how many built-in challenges the player completes, a large portion of which require mowing down mobs with a particular weapon, something that can only be accomplished in Stronghold.</p><p><img src="https://s2.loli.net/2023/10/30/6cyk12AHPoBiU95.png" alt="image.png"></p><p>Furthermore, a gunsmithing system was introduced, which requires consistent use of weapons and increased proficiency in heists (as opposed to the random card draws of Generation 2) in order to unlock the corresponding accessories, including the most important accessory for infiltration, the silencer. The weapon balance in the beta was criticized, and it was only in the official version that it became clear this was intentional: higher-level weapons are simply going to be stronger than lower-level ones. However, with the design of challenges and unlockable accessories, weaker weapons also get a chance to appear.</p><p>Such a growth mechanism involves a certain amount of grind, and how acceptable that is will be a matter of opinion. “Road Rage”, which can only be cleared by force, has become the easiest heist for matching with passersby and successfully clearing the level. The whole process is simple and brutal, with plenty of mobs, fast clears, and fast money and experience. 
And since there’s no shooting range in the current version of Harvest Day 3, Road Rage is also a great place to test your firearms.</p><p>After about 20-30 hours of “fine tuning,” players will have unlocked enough weaponry and skills to start building their loadouts, and the game will have just begun.</p><p>Harvest Day 3’s skill tree is a cyclical system: players acquire the three basic buffs “Coolness”, “Valor” and “Excitement” through specific play styles, and then use other skills to consume these three basic buffs, converting them into powerful bonuses or resources that are beneficial in battle. This skill tree is not as complex as the 2nd generation’s, but it still works wonders, and unlocking a few core skills can greatly improve the experience of both infiltrating and assaulting.</p><p>For infiltration, the Infiltrator and Fraudulent skill trees are useful, allowing you to quickly pick locks during the brief rush gained by brushing past passersby or guards, or to steal in broad daylight without raising any alarms.</p><p><img src="https://s2.loli.net/2023/10/30/gwQLzHF5q471MZo.png" alt="image.png"></p><p>Take it, take it all</p><p>Most skill sets are better suited for strong attack play. For example, the first skill in the “Harvester” line requires only 35 consecutive splashes of water to attach a chill status. 
The subsequent skill in this line, “Ammo Funnel,” allows the player to auto-load their magazines with the ammo dropped by mobs, creating uninterrupted firepower.</p><p>For the hardcore raider, there’s also the “Engineer” line, which lets an automated turret do the dirty work for you instead.</p><p><img src="https://s2.loli.net/2023/10/30/v3M5ijESUu6Crky.png" alt="image.png"><br>Escorting a transport target</p><p>A strong loadout will help players tackle higher-level heists with dignity. There are significant differences in the infiltration experience on higher difficulties, not only in the number of guards and surveillance cameras, but also in the map design. For example, the Art Museum heist gets a denser grid of infrared lasers, and the nightclub changes from being open for business to a private party where no one is allowed in.</p><p><img src="https://s2.loli.net/2023/10/30/AP4y6RQkerF1KNz.png" alt="image.png"></p><h2 id="On-the-highest-difficulty-the-floor-is-hot-to-the-feet"><a href="#On-the-highest-difficulty-the-floor-is-hot-to-the-feet" class="headerlink" title="On the highest difficulty, the floor is hot to the feet"></a>On the highest difficulty, the floor is hot to the feet</h2><p>In high-difficulty strong attacks, it’s not just the damage inflicted by enemies that’s elevated, but also the chances of a piece of equipment malfunctioning at an unlucky moment. Even with the lower difficulties memorized, the high-difficulty heists require a bit of re-acclimation, boosting the playability of the later stages of the game.</p><p>Overall, the official version of Harvest Day 3 goes back to basics, aiming to restore a more hardcore crime experience that is more akin to the unpretentious Generation 1 than the magical Generation 2. 
The game’s infiltration and strong-assault gameplay each have their own highlights, and some of the more modernized mechanics add to the fun.</p><p>Although the current version is light on heist content, and the main story and more complex gameplay mechanics are not yet fully rolled out, the conclusion reached in the beta applies just as well to the official release: Harvest Day 3 plays like a game made for 2023.</p><p>According to data from the SteamDB website, on the day of Harvest Day 3’s release, 77,900 people were online and ready to commit crimes at the same time on Steam alone, which is a promising result.</p><p>But two days later, that number began to drop, and so did the positive reviews on Steam. There was no other reason: the servers were having problems.</p><p><img src="https://s2.loli.net/2023/10/30/phlotUCHYLdEPwz.png" alt="image.png"></p><p>Compared to the beta, the game’s frame rate has improved somewhat, perhaps as a result of the production team’s decision to remove the Denuvo DRM before release. But the always-online requirement to get into the game hasn’t changed, and the server stuttering problem is still there. There’s still no offline or single-player mode in the official version, and even rooms with permissions set to friends-only have to sit through matchmaking for a while.</p><p>Due to the lack of Quick Match and the Generation 2 Crime Network features, players are limited to specific heists and difficulties when matching, and with 8 heists and 4 difficulties, there are 32 queues splitting the player pool, lengthening the time it takes to get into a game.</p><p>Server and matchmaking issues are the highest priority for the production team to address over the next few days, ranking even above the four content updates that are expected over the course of the year. 
Harvest Day 3 has hit the ground running as a hardcore crime simulation, and filling it with content won’t be hard. Whether it’s adding an offline mode, switching back to the P2P connections of its predecessor, or buying two more servers, it’s expected to win back gamers.</p>]]></content>
    
    
    <summary type="html">Harvest Day 3 review: Hardcore &quot;zero dollar purchase&quot; simulation in a high-tech art style</summary>
    
    
    
    <category term="Game Research Associates" scheme="https://www.nablepart.com/categories/Game-Research-Associates/"/>
    
    
    <category term="Steam" scheme="https://www.nablepart.com/tags/Steam/"/>
    
    <category term="Harvest Day" scheme="https://www.nablepart.com/tags/Harvest-Day/"/>
    
  </entry>
  
  <entry>
    <title>What will the world look like in the next 50 years?</title>
    <link href="https://www.nablepart.com/4c7956c456ed/"/>
    <id>https://www.nablepart.com/4c7956c456ed/</id>
    <published>2023-09-19T10:27:21.000Z</published>
    <updated>2025-08-25T09:00:39.798Z</updated>
    
    <content type="html"><![CDATA[<p>Kevin Kelly, author of Out of Control and founding executive editor of Wired magazine, believes that from the next 5,000 days to the next 50 years, AI will be the world’s most important keyword, becoming as ubiquitous an infrastructure as the Internet.</p><p>And in his mind OpenAI is the most successful AI company, “a great case of a disruptor.”</p><p><img src="https://s2.loli.net/2023/09/19/NjZ2d7YOw4amXWL.jpg"></p><p>Since ChatGPT’s hasty launch late last year and unexpected explosion in popularity, the company and its founding team have been propelled to center stage by the technology frenzy sweeping the globe.</p><p>Where did OpenAI come from and where is it going? In a recent lengthy article in Wired magazine, renowned tech journalist Steven Levy offers an in-depth discussion of OpenAI’s growth history and the company’s vision.</p><h2 id="Altman’s-Choice"><a href="#Altman’s-Choice" class="headerlink" title="Altman’s Choice"></a>Altman’s Choice</h2><p>Before leading OpenAI, Sam Altman had become president of Y Combinator, the world’s most famous tech incubator, and the point of cashing in on all those unicorns wasn’t to fill the wallets of his partners, but to fund change at the species level.</p><p>He set up a research arm in the hope of funding ambitious projects to solve the world’s biggest problems. But to him, AI is the area of innovation that will disrupt everything: a superintelligence that can solve human problems better than humans can.</p><p>He even had thoughts of running for governor of California. 
But he realized that he was perfectly capable of something bigger - leading a company that would transform humanity itself.</p><h2 id="At-a-dinner-in-California-he-and-Musk-hit-it-off"><a href="#At-a-dinner-in-California-he-and-Musk-hit-it-off" class="headerlink" title="At a dinner in California, he and Musk hit it off."></a>At a dinner in California, he and Musk hit it off.</h2><p>Musk was arguing with Google co-founder Larry Page at the time. Musk believed that human consciousness is precious and unique, while Page held that machines and humans are equal: if machines really became conscious and eliminated human beings, that would simply be the law of natural evolution. Page even accused Musk of being a “speciesist”.</p><p>So Musk resolved to spend some money to put in more effort for the “human team”.</p><p>Altman, who cares about both technological change and AI safety, naturally became his ideal partner.</p><p>Altman’s answer to the question of what attracts top talent to a brand-new AI research organization is the crazy vision of AGI.</p><p>AGI is so-called artificial general intelligence, that is, AI that can handle complex tasks the way human beings do. In the days when Altman was still running YC, computers were already able to pull off amazing feats through deep learning and neural networks, such as labeling photos, translating text, and optimizing complex ad networks.</p><p>These advances convinced him that AGI was truly within reach for the first time. However, putting it in the hands of large companies worried him. He believed that these companies would be too focused on their own products to seize the opportunity to develop AGI as quickly as possible, and, if they did create it, they might be reckless enough to make it public without taking the necessary precautions. That’s why there needed to be someone else to keep them in check.</p><p>Altman’s most important principle for screening recruits was that they must be believers in AGI. 
With his own and Musk’s call to arms, and the tantalizing rhetoric of exploring AGI, Altman scooped up the likes of Stripe CTO Greg Brockman and Google Brain core scientist Ilya Sutskever.</p><h2 id="In-December-2015-OpenAI-was-officially-founded"><a href="#In-December-2015-OpenAI-was-officially-founded" class="headerlink" title="In December 2015, OpenAI was officially founded."></a>In December 2015, OpenAI was officially founded.</h2><p><img src="https://s2.loli.net/2023/09/19/noHTQGMqrIyNXKt.png"></p><p>In 2021, he told reporters: “AGI can only be built once. And there aren’t many people who can run OpenAI well. I’ve been fortunate to have a series of experiences in my life that have really actively prepared me for this.”</p><h2 id="A-period-of-confusion"><a href="#A-period-of-confusion" class="headerlink" title="A period of confusion"></a>A period of confusion</h2><p>Despite having a crazy, great vision, OpenAI was clueless about how to get there.<br>Altman recalls that when the original small team didn’t have an office and gathered in Brockman’s apartment, his mind kept wondering: “What are we going to do?”</p><p>Things didn’t get much better until more than a year after the company was founded. The company didn’t have a clear direction; it was just trying random things, drilling down into a system that played video games, spending a lot of energy on robotics, and sending out a few papers.</p><p><img src="https://s2.loli.net/2023/09/19/qONiECcm7wBrJUR.jpg"></p><p>Altman says, remembering the scene at the company at the time: “We knew what we wanted to do. We knew why we wanted to do it. But we didn’t know how.”</p><p>But they believed. Their optimism was bolstered by the constant improvement of artificial neural networks using deep learning techniques, and Sutskever says that chasing AI “isn’t completely crazy. 
It’s just moderately crazy.”</p><p>It wasn’t until 2016 that OpenAI landed legendary AI researcher Alec Radford, who, after accepting OpenAI’s offer, told his high school magazine that taking on the new position was “a little bit like joining a graduate program” - an open, low-pressure habitat for researching AI.</p><p>Radford, an introverted, low-key researcher, didn’t accept the author’s invitation for a face-to-face interview, but instead wrote a long email describing his work at OpenAI.</p><p>His biggest interest was getting neural networks to have clear conversations with humans. This is a departure from the traditional scripted model for making chatbots, an approach that has served poorly from the original ELIZA to the popular Siri and Alexa. He writes: “Our goal is to see if there is any task, any environment, any domain, anything, where a language model can come in handy.”</p><p>At the time, he explains, language models were seen as novelty toys that could only occasionally generate a meaningful sentence, and only if you really squinted. His first experiment was to scan 2 billion Reddit comments to train a language model.</p><p>Like many of OpenAI’s early experiments, this one failed. That’s OK. The 23-year-old was given license to move on and fail again, Brockman says: “We were like, Alec’s great, let him do his thing.”</p><h2 id="The-turning-point"><a href="#The-turning-point" class="headerlink" title="The turning point"></a>The turning point</h2><p>In early 2017, an advance copy of a research paper co-authored by eight Google researchers appeared, but it didn’t get much attention. The paper’s official title was “Attention Is All You Need,” but it became known as the “Transformer paper,” named both to reflect the game-changing nature of the idea and to honor the toys that transform from trucks into giant robots. 
</p><p>Transformers enable neural networks to understand and generate language more efficiently. They analyze the corpus in parallel to find out which elements are worth paying attention to. This greatly optimized the process of generating coherent text in response to prompts.</p><p>Eventually, it was realized that the same technique could also generate images and even videos. While the paper has since been called the catalyst for the current AI frenzy, at the time Ilya Sutskever was one of the few people who understood how powerful this breakthrough was.</p><p>Brockman recalls that Ilya exclaimed in surprise when he saw the Transformer emerge: “This is what we’ve been waiting for.” That’s OpenAI’s strategy - work on the problem and then have faith that the team or someone in the field will manage to figure out the missing piece.</p><p>After that, Alec Radford started experimenting with the Transformer architecture. He said that at that time he made more progress in two weeks than he had in the previous two years. It became clear to him that the key to getting the most out of the new model was to scale it up - to train it on super-sized datasets. This idea was dubbed “Big Transformer” by his colleague Rewon Child.</p><p>This approach required a change in OpenAI’s previously fragmented, siloed corporate culture, where team resources had to be gathered to focus on a single point of breakthrough, as Quora CEO Adam D’Angelo, who sits on OpenAI’s board of directors, explained to the author: “To capitalize on Transformer’s strengths, you need to scale it up. You need to run it more like an engineering organization. You can’t have every researcher doing their own thing, training their own models, making elegant things that you can publish. 
You have to do all this more tedious, less elegant work.”</p><p><img src="https://s2.loli.net/2023/09/19/FLxzb3PNokrY5Xd.png"></p><p>Radford and his collaborators named the model they created “generatively pretrained transformer” - an acronym for GPT. Eventually, this kind of model came to be commonly known as “generative AI.” To build the model, they collected 7,000 unpublished books, many of them in the romance, fantasy, and adventure genres, and refined it with thousands of passages from Quora Q&amp;As and middle and high school exams. The model contains 117 million parameters or variables and outperformed all previous models in understanding language and generating answers.</p><p>But the most striking result was that, after processing such a large amount of data, the model was able to deliver results beyond its training, providing expertise in entirely new areas. These unplanned capabilities are known as “zero-shot” capabilities. They continue to baffle researchers - which is why many in the field are uneasy about these so-called large language models.</p><h2 id="Commercialization"><a href="#Commercialization" class="headerlink" title="Commercialization"></a>Commercialization</h2><p>OpenAI’s early funding basically came from Musk. But in 2018, Tesla began looking into using AI technology for Autopilot, just as OpenAI was making significant technological breakthroughs.</p><p>Musk had always regarded OpenAI, the company, as his own, so he proposed at the time that it would be better for him to take care of the entire company - directly merging OpenAI into Tesla. But the proposal was flatly rejected by Altman and other executives, so the two sides cut ties, and Musk withdrew his entire investment, announcing at a town hall meeting that he was leaving.</p><p>At the meeting, he predicted that OpenAI would fail, and called at least one of the researchers “stupid”.</p><p>With no revenue coming into the company, Musk’s withdrawal was certainly an existential crisis. 
While the research OpenAI was doing was some of the hippest AI in Silicon Valley, the fact that it was a non-profit organization certainly limited its attractiveness for funding.</p><p>In March 2019, OpenAI executives came up with a bizarre solution: create a for-profit entity while remaining a nonprofit. There was a cap on the returns of this for-profit division - a number that wasn’t made public, but speculated from the company’s bylaws to be as high as trillions of dollars (OpenAI also believed that if returns actually reached that number, they would certainly have built a usable AGI by then). Once that cap is reached, everything beyond it that the for-profit entity earns goes back to the non-profit lab.</p><p>So, with its new corporate structure, OpenAI managed to bring in a number of venture capital firms, including Sequoia. But embarrassingly for OpenAI, billions of dollars in venture capital is a minuscule amount of money when AI R&amp;D is a practically bottomless pit. The Big Transformer method for creating large language models requires massive hardware, and each iteration of the GPT series demands exponentially growing computing power that only a handful of companies can afford.</p><p>So OpenAI quickly locked in on Microsoft, and Altman told reporters that’s because Microsoft CEO Satya Nadella and CTO Kevin Scott were bold enough: after spending more than 20 years and billions of dollars building a supposedly cutting-edge AI research division, to admit that their work was a mess and then bet on a small company that was only a few years old.</p><p>Microsoft initially contributed $1 billion in return for computing time on its servers. But the deal grew in size as both sides gained confidence. 
Now, Microsoft has poured $13 billion into OpenAI.</p><p><img src="https://s2.loli.net/2023/09/19/36WkhFgMRLrOIyK.png" alt="12.png"></p><p>Microsoft has also secured a big payday for itself, not only owning a “non-controlling stake” in OpenAI’s for-profit division - reportedly 49 percent - but also obtaining an exclusive license to commercialize OpenAI’s technology. Moreover, it managed to make its cloud computing platform Azure the exclusive cloud provider for OpenAI. In other words, Microsoft’s huge investment not only secures a powerful partner, but also locks in one of the world’s most popular new customers for its Azure cloud service.</p><p>Furthermore, under the terms of the deal, some of OpenAI’s original ideals - providing equal access for all - appear to have been tossed in the trash.<br>Over the course of the deal, OpenAI gradually took on the nature of a for-profit organization, which turned off some employees and led to the subsequent departure of several executives, who argued that OpenAI had become too commercialized and strayed from its original mission.</p><h2 id="The-Future-of-OpenAI"><a href="#The-Future-of-OpenAI" class="headerlink" title="The Future of OpenAI"></a>The Future of OpenAI</h2><p>As the crazy vision of AGI gets closer to reality, Sam Altman and his team are under increasing pressure to revolutionize every product cycle, meet the business needs of their investors, and stay ahead of the competition. More critically, they were also tasked with preventing AI from wiping out humanity as a “quasi-savior”.<br>OpenAI has changed a lot in the course of time, but the vision of building a secure AGI remains unchanged and is still driving them forward, and the OpenAI leaders are confident that they will create an AI system that is smart and secure enough to bring humanity into an era of unimaginable abundance.</p>]]></content>
    
    
    <summary type="html">Where did OpenAI come from and where is it going? In a recent lengthy article in Wired magazine, renowned tech journalist Steven Levy offers an in-depth discussion of OpenAI&#39;s growth history and the company&#39;s vision.</summary>
    
    
    
    <category term="AI" scheme="https://www.nablepart.com/categories/AI/"/>
    
    
    <category term="ai" scheme="https://www.nablepart.com/tags/ai/"/>
    
    <category term="openai" scheme="https://www.nablepart.com/tags/openai/"/>
    
    <category term="altman" scheme="https://www.nablepart.com/tags/altman/"/>
    
    <category term="chatgpt" scheme="https://www.nablepart.com/tags/chatgpt/"/>
    
  </entry>
  
  <entry>
    <title>Stability AI Makes Bold Move With Stable Audio for AI Audio Generation</title>
    <link href="https://www.nablepart.com/46037a5ade79/"/>
    <id>https://www.nablepart.com/46037a5ade79/</id>
    <published>2023-09-19T01:28:00.000Z</published>
    <updated>2025-08-25T09:00:39.794Z</updated>
    
    <content type="html"><![CDATA[<h2 id="Introducing-Stable-Audio"><a href="#Introducing-Stable-Audio" class="headerlink" title="Introducing Stable Audio"></a>Introducing Stable Audio</h2><p>Built by Stability AI’s in-house Harmonai audio lab, Stable Audio was trained on a dataset of 800,000 audio clips totaling 19,500 hours licensed from audio partner AudioSparx.</p><p>Like Stable Diffusion, Stable Audio generates audio from natural language prompts specifying genre, tempo, instruments, moods, and other attributes. For example, a user could input “Disco, synthesizer, drums, 120 BPM, orchestral, piano, guitar” to get a matching audio clip.</p><p>In our early audio tests, Stable Audio shows significant quality improvements over previous AI music generators, with less noise and compression artifacts. However, the instrumentation still sounds more haphazard compared to human-composed music.</p><p><img src="https://cdn.jsdelivr.net/gh/PirlosM/image@main/20230919131828.png"></p><h2 id="Commercial-Release"><a href="#Commercial-Release" class="headerlink" title="Commercial Release"></a>Commercial Release</h2><p>Stability AI has adopted a similar subscription model to Midjourney for Stable Audio. The free tier permits generating 20 audio clips per month (45 sec each), while the $11.99 tier allows 500 clips up to 90 sec that can be used commercially.</p><p>Surprisingly, Stability AI has not open sourced the model, despite its open source ethos. 
But the company promises that Harmonai will release another audio model trained on different data in the future, sharing the Stable Audio code to allow custom training.</p><p><img src="https://cdn.jsdelivr.net/gh/PirlosM/image@main/20230919131947.png"></p><p>Stability AI also notes that its training methodology avoids choppy audio output by incorporating metadata on clip duration and start times, enabling uninterrupted generation of arbitrary length.</p><h2 id="Applications-in-Gaming"><a href="#Applications-in-Gaming" class="headerlink" title="Applications in Gaming"></a>Applications in Gaming</h2><p>AI-generated music has potential for gaming, as cinematic soundtracks become more crucial. However, most game studios still underinvest in audio. Compared to CGI art teams, audio departments remain small, limiting the financial incentive for AI tools.</p><p>For now, AIGC must also compete against mature commercial audio libraries and economical outsourcing. But with continuous progress, AI could someday produce AAA-quality soundtracks at scale, making it indispensable for game creators.</p><p>Stable Audio demonstrates Stability AI’s growing capabilities in synthesized audio. As models improve, AI music generation may significantly impact many media and entertainment sectors.</p>]]></content>
    
    
    <summary type="html">Stability AI Makes Bold Move With Stable Audio for AI Audio Generation Stability AI, the startup behind the image generator Stable Diffusion, recently made a bold move into audio generation with the launch of Stable Audio. This new tool leverages similar diffusion model techniques to create audio files from textual descriptions.</summary>
    
    
    
    <category term="Technology" scheme="https://www.nablepart.com/categories/Technology/"/>
    
    
    <category term="Stable Audio" scheme="https://www.nablepart.com/tags/Stable-Audio/"/>
    
    <category term="AI" scheme="https://www.nablepart.com/tags/AI/"/>
    
  </entry>
  
  <entry>
    <title>Spider-Man 2: Canned vs. Canned on Cloudy Eyeballs</title>
    <link href="https://www.nablepart.com/57ab9f6b84d9/"/>
    <id>https://www.nablepart.com/57ab9f6b84d9/</id>
    <published>2023-09-17T12:00:00.000Z</published>
    <updated>2025-08-25T09:00:39.794Z</updated>
    
    <content type="html"><![CDATA[<p><img src="https://s2.loli.net/2023/10/30/UKZxw6yiPJndRlB.png" alt="image.png"></p><h2 id="Spider-Man-2-Canned-vs-Canned-on-Cloudy-Eyeballs"><a href="#Spider-Man-2-Canned-vs-Canned-on-Cloudy-Eyeballs" class="headerlink" title="Spider-Man 2: Canned vs. Canned on Cloudy Eyeballs"></a>Spider-Man 2: Canned vs. Canned on Cloudy Eyeballs</h2><p>The first Spider-Man for PS4 was released in 2018. By then, gamers were growing tired of “canned games”: self-published media and KOLs had picked up terms like “formulaic open world” and “checklist design”, and nobody had any mercy left for games whose maps were covered in question marks. It seemed that any game with a whiff of the canned flavor couldn’t possibly be good.</p><p>Spider-Man was one of those games, and I remember a video creator praising Assassin’s Creed and then complaining about Spider-Man, saying Sony was so disappointing that it had to make a canned game to fool people. The logic was baffling, but it did reflect some of the climate of opinion at the time.</p><p>It’s true that Spider-Man is a canned game, but cans come in different grades, and Spider-Man is clearly one of the finest. Whether it’s the brilliant use of Spider-Man’s abilities to solve the age-old problem of open-world traversal, the high quality of the combat, scripting and story performances, or even the collectibles - usually the dullest part of a formulaic open world - everything is done with the utmost sincerity, and fans of the IP will find plenty of heartfelt detail.</p><p><img src="https://s2.loli.net/2023/10/30/V36nPYQKflERZr7.png" alt="image.png"></p><p>But it’s also true that the first Spider-Man was already so accomplished, tapping into every aspect of the IP (especially in gameplay), that it left the impression the new Spider-Man wouldn’t know where to start.
It had already built a sprawling, realistic city, and the gameplay built on Spider-Man’s abilities already covered pretty much everything imaginable.</p><p>The difference between games and movies as media is that all of this century’s Spider-Man movies combined run only about 20 hours, but the first Spider-Man game and its follow-ups can each easily deliver over 50 hours of play. In other words, a game consumes the IP far faster than a movie does, and after long hours swinging Spider-Man through the skyscrapers of Manhattan, players are far more likely to feel aesthetic fatigue - in fact, that fatigue was already very apparent to me when I finished the first game and then played the DLC, and it was my biggest concern before starting Spider-Man 2.</p><p><img src="https://s2.loli.net/2023/10/30/2X698tjyezRUkvi.png" alt="image.png"></p><p>That worry did come true, especially at the start of the sequel, which opens, as is now routine, with a gorgeously cinematic scripted battle. But I didn’t feel much of anything inside other than “again?” - this kind of opening has now appeared in three consecutive installments. And the optimization of the game’s first hour has some kind of serious problem, as the graphics simply don’t look like a PS5-exclusive blockbuster.
In the opening battle against the Sandman, the yellow sand filling the sky makes the buildings in the scene look like pieces of paper, and it eats a huge amount of performance while failing to render the texture sand should have.</p><p><img src="https://s2.loli.net/2023/10/30/xMZrWcJlPLi8dsX.png" alt="image.png"></p><p>Insomniac doesn’t seem to be very good at huge boss fights; this scene feels like an homage to God of War 3’s battle with Kronos, but the gigantic Sandman has size without much menace.</p><p><img src="https://s2.loli.net/2023/10/30/WMQqUjvnCeJEaA3.png" alt="image.png"></p><p>For a moment I thought the media review build simply lacked the final graphics, but when the game went live today, the prologue’s visuals were still subpar.</p><p>After the prologue, however, Spider-Man 2’s graphics return to normal: in the 40 fps “fidelity” mode with ray tracing on, the city’s architecture, hair, and suit fabrics are all top-tier for the PS5.
While the art resources are still a bit uneven - the villains’ faces are often more detailed than the protagonists’ - the further I played, the better I felt about the game’s overall graphics.</p><p><img src="https://s2.loli.net/2023/10/30/4PXHqK7y6VMskz3.png" alt="image.png"></p><p><img src="https://s2.loli.net/2023/10/30/Tbnr38PuNt1hdfp.png" alt="image.png"></p><h2 id="This-villain-for-example-has-one-of-the-most-detailed-faces-in-the-game"><a href="#This-villain-for-example-has-one-of-the-most-detailed-faces-in-the-game" class="headerlink" title="This villain, for example, has one of the most detailed faces in the game."></a>This villain, for example, has one of the most detailed faces in the game.</h2><p>The gameplay is a best-of collection for the series. Since each of the Spider-Man mechanics from the first two installments worked well, this time the team not only added to them - carrying over the various combat and non-combat design elements of the previous games - but multiplied them: you control two Spider-Men, each with his own skill tree plus a shared one, on top of unlockable suit traits and web-gadget options. There are four combat resources alone, with two slots on each side of the screen showing their status. The result is that Spider-Man 2 has the most key combinations of any action game I’ve played since Ghostbusters - not only does combat involve a lot of combos, with attacks split into long and short presses, but even ordinary traversal uses several key combinations.</p><p>I envy fans who are starting the series with Spider-Man 2 this year - how happy they must be, dazzled by the sheer volume on offer. But for me, having played the previous installment, none of it felt fresh anymore, whether weaving between buildings, infiltrating enemy camps, or mauling hordes of enemies with signature skills.
With the exception of the new Venom powers and the web wings, the “very Spider-Man” parts of the sequel felt like a bit of a chore to play.</p><p><img src="https://s2.loli.net/2023/10/30/G4Lgi2NuBz9hpXv.png" alt="image.png"></p><p>Just as I was wondering whether I had the motivation to finish the game, Spider-Man 2 genuinely surprised me: it delivers a storyline of almost extravagant quality, beyond what one would expect from an “open-world game”, and better than the story experience of many linear games.</p><p>The pacing and staging of Spider-Man 2’s main story are so strong that it’s no exaggeration to say it’s like dropping a copy of Uncharted 4 into an open world - a giveaway if anything. From the moment you first ride a bike with Harry and relive your school-day adventures, the game pulls off a superbly paced, Naughty Dog-grade linear scripted episode that doubles as a gameplay tutorial - you can tell which game this sequence is paying homage to, yet it felt even more polished than the childhood flashback of Drake and his older brother in Uncharted 4.</p><p><img src="https://s2.loli.net/2023/10/30/dqUpsAlJgvSIMtz.png" alt="image.png"></p><p>And there’s far more than one main-story sequence of this caliber; the production is lavish to a frightening degree.
The amusement park sequence, for example, is so lavish in its assets, so rich in interaction, and so subtle in its performances that it has none of the thin pallor of a formulaic open world - it feels like a carefully constructed level from a linear game.</p><p><img src="https://s2.loli.net/2023/10/30/bYkqIpd8MajwAuB.png" alt="image.png"></p><p>Then there’s a section in the middle where you’re treated to a Ratchet &amp; Clank-style “traversal” sequence that proudly shows off the PS5’s SSD streaming, rebuilt with Spider-Man’s own moves and even more visually stunning. And it’s just a brief one-off in the game - the kind of stunning, never-reused set piece that feels almost wastefully extravagant in an open-world game.</p><p><img src="https://s2.loli.net/2023/10/30/Y6IXpRBKbcsVdjG.png" alt="image.png"></p><p><img src="https://s2.loli.net/2023/10/30/ydPXgnm9ViD2Q6v.png" alt="image.png"></p><h2 id="You’ll-travel-to-the-poles-and-back-to-New-York-in-an-instant"><a href="#You’ll-travel-to-the-poles-and-back-to-New-York-in-an-instant" class="headerlink" title="You’ll travel to the poles and back to New York in an instant"></a>You’ll travel to the poles and back to New York in an instant</h2><p>What’s rarer still is that the main quest is not only comfortably paced but genuinely well written. If the best Marvel movie scores a 10, the plot of Spider-Man 2 is at least a 7 - a very high mark for a game. Part of the credit goes to the excellent performance capture, which gives the “actors” far more dramatic presence. Spider-Man 2’s facial models aren’t exceptional, but the vivid detail of its expressions is definitely top-tier.</p><p><img src="https://s2.loli.net/2023/10/30/djL7KMIt8ZQ5pX1.png" alt="image.png"></p><p>So, despite my severe fatigue with the core gameplay, I eventually made it through Spider-Man 2.
You can certainly still file Spider-Man 2 under formulaic open-world games, or canned games, since it does scatter plenty of repetitive chores and collectibles around the map - the typical filler of the formulaic open world, where the huge map must always be stuffed with something. But by delivering some of the best story performances of any open-world game in recent years, Spider-Man 2 also proves that between one can and another there can be a world of difference.</p><p><img src="https://s2.loli.net/2023/10/30/UG9skL6RbgfJtjp.png" alt="image.png"></p>]]></content>
    
    
    <summary type="html">The initial generation of Spider-Man for PS4 was released in 2018. At that time, gamers were gradually getting tired of &quot;canned games&quot;</summary>
    
    
    
    <category term="Game News" scheme="https://www.nablepart.com/categories/Game-News/"/>
    
    <category term="Game Research Associates" scheme="https://www.nablepart.com/categories/Game-Research-Associates/"/>
    
    
    <category term="Spider-Man" scheme="https://www.nablepart.com/tags/Spider-Man/"/>
    
    <category term="PS4" scheme="https://www.nablepart.com/tags/PS4/"/>
    
    <category term="Assassin&#39;s Creed" scheme="https://www.nablepart.com/tags/Assassin-s-Creed/"/>
    
  </entry>
  
  <entry>
    <title>The Siege of Li Jiaqi</title>
    <link href="https://www.nablepart.com/66e333d7d9ee/"/>
    <id>https://www.nablepart.com/66e333d7d9ee/</id>
    <published>2023-09-17T12:00:00.000Z</published>
    <updated>2025-08-25T09:00:39.806Z</updated>
    
    <content type="html"><![CDATA[<p>When I entered Li Jiaqi’s live-stream room after 7pm last night, he and his assistant were introducing a down jacket.</p><p>I don’t know the original price, but the jacket had been marked down to 1,899 yuan, and with the Double Eleven discount it was only 1,499 yuan. On the cluttered product page, the words “Li Jiaqi live room” were emphasized; it took a sharp eye to make out that the jacket was from Bosideng.</p><p>Leaving the product page and returning to the stream, more than 10 million people were watching. Li Jiaqi wore a light blue top, very close in color to the backdrop of the live room the last time he stood up to apologize.</p><p>More than a month after the “Huaxizi incident”, at the height of Double Eleven, Li Jiaqi is once again in the eye of the storm.</p><p>This time the brand at the center is the home-appliance maker Hai’s.</p><p>This year’s JD.com (Jingdong) Double Eleven opened at 8:00 pm on October 23. At noon the next day, a procurement-and-sales staffer from JD’s in-house baking category posted a long WeChat Moments message: he had received a lawyer’s letter from the brand Hai’s, because one of Hai’s ovens was selling cheaper on JD than in Li Jiaqi’s live stream, and a “reserve price agreement” between Hai’s and Li Jiaqi’s side would force Hai’s to pay enormous liquidated damages.</p><p><img src="https://cdn.jsdelivr.net/gh/youngjuning/images@main/202310291914580.jpeg"></p><p>But the staffer was baffled: he said the price cut came entirely at the expense of his department’s gross profit.
Setting this oven to the lowest price on the whole network cost only JD money; JD itself was eating the loss.</p><p>There are three parties to the controversy. JD lit the fire; Mei ONE, the company behind Li Jiaqi, denied the existence of any “reserve price agreement” and said that pricing for products in Li Jiaqi’s live room rests with the brands themselves.</p><p>The remaining party, Hai’s, chose to stand with Li Jiaqi.</p><p>In its statement, Hai’s said it had never signed any “lowest price agreement” with Li Jiaqi’s live room, insisted that its lowest Double Eleven price was aligned across the whole network, and even filed a real-name complaint against JD with the State Administration for Market Regulation.</p><p>This looked like a dispute between brand and platform, with Li Jiaqi merely caught in the middle. Half a day later, however, Sina Technology published Mei ONE’s live-stream promotion service contract.</p><p>The contract contains a “special protection clause”: the brand must guarantee that, for all promotional services agreed under the contract, the discounts offered through the designated anchor are the strongest available under the same conditions within the guarantee period and guarantee scope.</p><p>The “best price” guarantee covers the Tao platforms (including but not limited to Taobao&#x2F;Tmall stores and Tao-ecosystem live-stream content channels), other e-commerce platforms, and offline channels.</p><p><img src="https://cdn.jsdelivr.net/gh/youngjuning/images@main/202310291914915.png"></p><p>Under this contract, the brand is in breach if one of its items sells for more in Li Jiaqi’s live room than in Taobao stores, in the brand’s own offline stores, or on the JD platform.</p><p>The consequence of a breach is compensation of RMB 2 million
to Mei ONE, plus all costs and losses incurred in refunding the price difference.</p><p>This seemed to confirm the existence of the “reserve price agreement”, but Mei ONE has not acknowledged the contract.</p><p>Back to Li Jiaqi: on October 24, the Hai’s affair was not the only front in the siege.</p><p>Professional fraud-buster Wang Hai posted a video on Douyin on October 24, reporting that the day before, a consumer had bought a Hetian jade necklace priced at nearly 600 yuan in Li Jiaqi’s live room. Although the necklace came with a certificate of authentication as Hetian jade, NGTC (National Jewelry and Jade Inspection Group Co., Ltd.) determined that the necklace was not Hetian jade but carbonate-tremolite jade.</p><p><img src="https://cdn.jsdelivr.net/gh/youngjuning/images@main/202310291926184.jpeg"></p><p>Even “Crazy Little Brother Yang” - the Douyin streamer famous for frenzied live shows and viral clips, whose live-commerce operation is often compared to Simba’s - came across as a righteous figure standing opposite Li Jiaqi.</p><p>On October 24, in the middle of “Crazy Little Brother Yang’s” live stream, a lighting brand suddenly pulled its product, allegedly because its price there was lower than in Li Jiaqi’s live room.
Big Brother Yang, the elder of the two brothers, said angrily after the delisting that Li Jiaqi uses his influence as a top anchor to control prices and even inventory, and that the more goods are sold through Li Jiaqi’s live room, the greater his influence over brands becomes - his control snowballing.</p><p>The outside world’s questions about the “reserve price agreement” are, in essence, about how livestream selling - which began as a sales format meant to improve the consumer experience - has come to sit above both platform and brand, unbalancing the game of pricing power.</p><p>Perhaps another story, one unrelated to Li Jiaqi, shows some similarities.</p><p>Wang Yuhui, founder of the loose-leaf tea brand Chabiubiu, recently posted an article “asking for help”: with Boxmart trimming its product line from 300 brands to 100, Chabiubiu was formally delisted and required to clear out its goods by a deadline.</p><p>That was the last straw that crushed the brand, but even before it, the strong yet distorted influence that livestream selling exerts on new consumer brands had already warped the company’s entire operating rhythm.</p><p>“At the time, no one advised me against livestream selling, because it really can drive sales. But after subtracting promotional discounts, anchor commissions, and slot fees, the losses were total. Selling through big anchors means losing money to make noise, and brands simply don’t dare offend them - if a big anchor drops your brand, they can turn around and cultivate your competitors, so do you dare walk away? A brand in our industry was then spending 20 million a month on livestream campaigns for an ROI of only about 1, but with the momentum up, we were all held hostage, doing it just to keep some visibility. The big anchors were too much of a trap, so we could only turn to small and medium anchors.
</p><p>“Small and medium anchors’ slot fees are indeed much cheaper - a few thousand to tens of thousands - but they came with a new demand: cut the price in half on stream! Because these anchors have no power to lift a brand, let alone loyal fan bases, they can only attract viewers with genuinely cheap goods. But where do you find so many goods that are both cheap and good? It can only come out of some other hard-pressed merchant’s hide. Cut the price in half, then shave off a 30%-40% commission, and every sale is a loss.</p><p>“For a brand, once the price drops it can never go back; with no profit, it basically can’t last. So many brands change the packaging, cut the quality, and reduce costs so that at least they don’t lose money. But I wasn’t willing to lower my material standards to cater to the market and become a ‘cheap brand’; I was determined not to sell bags of fluff. I still wanted to believe in consumption upgrading, so we gave up livestream selling. In the end we spent a few million on marketing without making a splash, bleeding money - which already shows how low market efficiency has sunk.”</p><p>Chabiubiu gradually went from a profitable business to one whose marketing spend climbed from 30% to 50% of revenue, and at its peak a hollowed-out 80%.</p><p><img src="https://cdn.jsdelivr.net/gh/youngjuning/images@main/202310291928440.jpeg"></p><p>Chabiubiu is just one of countless brands trapped in the live room.
Its dream of becoming a star company ended by 2022 - and overall, few brands on the new-consumer track have managed to reach the shore as stars.</p><p>But Huaxizi, lately so often tied to Li Jiaqi, certainly counts as one.</p><p>On the night of September 10, while pitching Huaxizi’s 79-yuan eyebrow pencil on stream, Li Jiaqi angered consumers. To viewers complaining it was expensive, he blurted out the infamous line: “Sometimes look for the reasons in yourself - your salary hasn’t risen in all these years.” What followed was consumer outrage, Li Jiaqi’s tearful apology, and Huaxizi’s disappearance from his live room for 43 days.</p><p>In 2018, Mei ONE reorganized itself around the “Li Jiaqi” IP, and his “OMG” catchphrase began to go viral; in the same period, Huaxizi finally made up its mind to go into livestream e-commerce. In 2019, Huaxizi entered Li Jiaqi’s live room for the first time; after its debut there, a Huaxizi loose powder became that year’s “Double 11” number one in store sales, selling more than 20,000 units.</p><p>From there things took off: Huaxizi’s 2019 sales reached 1 billion yuan, then tripled the following year. By 2021, its sales had hit a whopping 5.4 billion yuan, and its market share had grown from 0.3% in 2017 to 6.8%.</p><p>The star brand’s ties to Li Jiaqi kept deepening. An analysis by Globe suggested that of the 79-yuan eyebrow pencil, Huaxizi takes 4 yuan in profit, while after manufacturing and supply-chain costs, Li Jiaqi siphons off as much as 63 yuan. Rumor has it that Huaxizi’s commission to Li Jiaqi runs as high as 60%-80%, or even more than 100%.
Huaxizi denied this claim.</p><p>Also on October 24, the very day all parties were besieging Li Jiaqi, Huaxizi seized the chance to return to his Double Eleven live stream - though the eyebrow pencil was not among the selections.</p>]]></content>
    
    
    <summary type="html">When I entered Li Jiaqi&#39;s live broadcasting room after 7pm last night, he was introducing a down jacket with his little assistant.</summary>
    
    
    
    <category term="Financial" scheme="https://www.nablepart.com/categories/Financial/"/>
    
    
    <category term="Financial" scheme="https://www.nablepart.com/tags/Financial/"/>
    
    <category term="Li Jiaqi" scheme="https://www.nablepart.com/tags/Li-Jiaqi/"/>
    
    <category term="Live" scheme="https://www.nablepart.com/tags/Live/"/>
    
    <category term="E-commerce" scheme="https://www.nablepart.com/tags/E-commerce/"/>
    
    <category term="Huaxizi" scheme="https://www.nablepart.com/tags/Huaxizi/"/>
    
  </entry>
  
  <entry>
    <title>Exploring Flow Crypto, the Next Wave of Digital Currency</title>
    <link href="https://www.nablepart.com/d33201955838/"/>
    <id>https://www.nablepart.com/d33201955838/</id>
    <published>2023-09-16T00:10:17.000Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
    <content type="html"><![CDATA[<p>If you are a fan of apps like Alien Puppies and Mars Genesis, then you may already be familiar with Flow, the powerful blockchain behind popular applications in crypto, NFTs, and gaming. Launched by Dapper Labs in 2019, Flow is quickly gaining traction as a go-to blockchain for businesses looking to build and grow their user base. The Flow blockchain focuses on low transaction fees and a truly user-centric experience, positioning itself as a foundation for the next generation of apps, games, and digital assets. But what is Flow crypto, and why should you use it? In this article, we focus on FLOW.</p><h2 id="Defining-Flow-Crypto"><a href="#Defining-Flow-Crypto" class="headerlink" title="Defining Flow Crypto"></a>Defining Flow Crypto</h2><p>FLOW is the native cryptocurrency of the Flow blockchain. It pays for transactions, funds staking rewards, and helps secure the network. FLOW is also used to develop and govern dApps on the Flow blockchain. Flow is one of the leading blockchains for building the next generation of apps, NFTs, and games. It is designed to work without sharding, so transactions run fast and smoothly. Importantly, Flow aims to strip away the technical friction found on other blockchains. The Flow blockchain is meant not only to improve the end-user experience but also to give developers a platform to build, test, and launch projects quickly and efficiently.</p><h2 id="How-does-Flow-Crypto-work"><a href="#How-does-Flow-Crypto-work" class="headerlink" title="How does Flow Crypto work?"></a>How does Flow Crypto work?</h2><p>Each blockchain has its own validation technique for processing transactions and securing the network. For instance, the Ethereum blockchain’s Proof-of-Stake (PoS) consensus allows for a decentralized and secure network.
However, Ethereum struggles to scale and to process large transaction volumes at low cost. This has driven the creation of Layer 2 solutions that process many transactions off-chain, more quickly and with lower fees than the mainnet. Flow, by contrast, divides the work of processing transactions across four specialized node roles (collection, consensus, execution, and verification), each with its own responsibility. Rather than relying on off-chain solutions to scale, Flow’s division of labor among nodes gives it a more sustainable approach to blockchain scalability.</p><h2 id="The-benefits-of-the-Flow-blockchain"><a href="#The-benefits-of-the-Flow-blockchain" class="headerlink" title="The benefits of the Flow blockchain"></a>The benefits of the Flow blockchain</h2><h3 id="Flow-is-user-friendly-and-mainstream-adaptable"><a href="#Flow-is-user-friendly-and-mainstream-adaptable" class="headerlink" title="Flow is user-friendly and mainstream adaptable"></a>Flow is user-friendly and mainstream adaptable</h3><p>Flow is built to be mainstream-ready, making it highly developer-friendly and easy for novice crypto users to get started. For example, the Flow network lets users recover lost keys with relative ease, and interacting with your favorite projects takes fewer steps. This is one reason companies such as Dapper Labs use Flow for their portfolio projects.</p><h3 id="Flow-is-void-of-sharding"><a href="#Flow-is-void-of-sharding" class="headerlink" title="Flow is void of sharding"></a>Flow is void of sharding</h3><p>According to Flow, smart contracts and user accounts on the platform can always interact with each other in a single atomic, consistent, isolated, and durable (ACID) transaction. This means all applications on the Flow blockchain run against the same shared execution state. Sharding and Layer 2 solutions weaken the network effects available to dapps; while sharding is useful in some cases, it is not a permanent solution.
Flow avoids sharding in its blockchain design, allowing it to bypass some of these barriers.</p><h3 id="Flow-focuses-on-decentralization"><a href="#Flow-focuses-on-decentralization" class="headerlink" title="Flow focuses on decentralization"></a>Flow focuses on decentralization</h3><p>Flow crypto provides an incredibly easy way for developers and users to participate in the Flow ecosystem. This paves the way for more individuals to take part in the consensus process that safeguards the network. According to the Flow team, they prioritize a diverse and decentralized set of participants in the Flow Network, which means they distribute their token in compliance with securities laws and other relevant regulatory frameworks.</p><h3 id="Flow-is-backed-by-some-of-the-world’s-biggest-investors-and-brands"><a href="#Flow-is-backed-by-some-of-the-world’s-biggest-investors-and-brands" class="headerlink" title="Flow is backed by some of the world’s biggest investors and brands"></a>Flow is backed by some of the world’s biggest investors and brands</h3><p>Flow crypto is backed by powerful and well-respected investors, supporting long-term growth and a sustainable ecosystem. Both the Flow blockchain and the FLOW token continue to grow, and as more people learn about Flow crypto, it’s in a good position to become a major player in the mainstream adoption of cryptocurrency.</p><h2 id="How-to-earn-newly-created-Flow-Tokens"><a href="#How-to-earn-newly-created-Flow-Tokens" class="headerlink" title="How to earn newly created Flow Tokens"></a>How to earn newly created Flow Tokens</h2><p>According to the project papers, users can earn FLOW tokens in several ways. Around 1.25 billion FLOW tokens are available. Users can earn tokens primarily by acting as a validator on the blockchain: a validator receives newly minted FLOW as a reward for running a node on the network (sometimes loosely described as mining). 
Furthermore, users can receive newly minted FLOW as a reward for building apps on the Flow blockchain. Individuals holding FLOW tokens also receive holding rewards. Additionally, users can stake their FLOW tokens on the network and gain new FLOW tokens in return.</p><h2 id="Is-Flow-Crypto-a-Good-Investment"><a href="#Is-Flow-Crypto-a-Good-Investment" class="headerlink" title="Is Flow Crypto a Good Investment?"></a>Is Flow Crypto a Good Investment?</h2><p>Whether FLOW crypto is a good investment is ultimately up to you, but there are several reasons not to overlook it. First, FLOW is a well-structured cryptocurrency with numerous benefits compared to other blockchains: it is scalable, secure, and user-friendly, making it a good choice for dApps that want to reach large audiences. Secondly, FLOW has the backing of a strong team of developers and investors. The technical staff includes some of the most experienced and talented individuals in the blockchain industry, and they have a reputation for success. Furthermore, FLOW has several partnerships with major brands, which could help drive the adoption of FLOW and the Flow blockchain. All of these factors make FLOW a reasonable candidate for investment, though, as with other blockchains, investing in Flow will require patience. Despite these advantages, there are also risks. The cryptocurrency market is very volatile, and the price of FLOW could easily swing between extreme lows and highs. Furthermore, FLOW is a young cryptocurrency and has yet to establish itself the way Bitcoin has.</p><h2 id="Bottom-line"><a href="#Bottom-line" class="headerlink" title="Bottom line"></a>Bottom line</h2><p>FLOW crypto is showing a lot of promise with several potential advantages. However, there are also real risks associated with investing in it. 
Therefore, be mentally and financially prepared before investing. Ultimately, the decision to invest in Flow crypto is a personal one. FLOW has the potential to be a good investment, but conduct your own research before investing in any cryptocurrency.</p>]]></content>
    
    
    <summary type="html">FLOW is the native cryptocurrency of the Flow blockchain. It is used to pay for transactions, earn staking rewards, and help safeguard the network</summary>
    
    
    
    <category term="Blockchain" scheme="https://www.nablepart.com/categories/Blockchain/"/>
    
    
    <category term="crypto" scheme="https://www.nablepart.com/tags/crypto/"/>
    
    <category term="bnb" scheme="https://www.nablepart.com/tags/bnb/"/>
    
    <category term="$bnb" scheme="https://www.nablepart.com/tags/bnb/"/>
    
    <category term="bnb chain" scheme="https://www.nablepart.com/tags/bnb-chain/"/>
    
    <category term="binance" scheme="https://www.nablepart.com/tags/binance/"/>
    
  </entry>
  
  <entry>
    <title>BNB Greenfield: the new public chain that bridges storage and compute</title>
    <link href="https://www.nablepart.com/573f029d9601/"/>
    <id>https://www.nablepart.com/573f029d9601/</id>
    <published>2023-09-15T00:10:17.000Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
    <content type="html"><![CDATA[<h2 id="BNB-Chain"><a href="#BNB-Chain" class="headerlink" title="BNB Chain"></a>BNB Chain</h2><p>On the evening of February 1, BNB Chain released the BNB Greenfield whitepaper. This is the third chain in the BNB ecosystem, after BNB Beacon Chain and BNB Chain.<br>For developers and users, the most interesting thing about this chain is that it has both storage and computing capabilities.</p><p>For developers, the native storage function means dApps that need data storage no longer have to build on two infrastructures. It is also more convenient for users who need storage, since both storage and compute are tied to the same account, making it clear that the user’s data is sovereign.</p><p>This innovation may unlock a wide range of new use cases in the future, and its breadth of scenarios and applications will open up new avenues of value capture for BNB.</p><h2 id="What-is-BNB-Greenfield"><a href="#What-is-BNB-Greenfield" class="headerlink" title="What is BNB Greenfield?"></a>What is BNB Greenfield?</h2><p>What are the cornerstones of a decentralized Internet world? Assets, computation, and storage.</p><p>The emergence of new crypto-assets since the birth of Bitcoin has provided a vehicle for the flow of value in a decentralized world. Increasingly powerful public chains provide the computing power necessary for that world.</p><p>Smart contracts became an essential part of the crypto world when the Turing-complete Ethereum introduced them. 
Smart contracts with computing power bring programmability and composability to the crypto world, which is the cornerstone of the prosperity of DeFi and NFTs.</p><p><img src="https://raw.githubusercontent.com/zqwuming/blogimage/img/img/placeholder.png"></p><p>Yet to bring fully decentralized services to users and ultimately transition our internet world to a completely decentralized one, the on-chain world is still missing an important piece: storage.</p><p>Although existing decentralized storage solutions can solve many problems, such as data privacy protection and censorship resistance, the current decentralized systems are still not mature enough.</p><p>In this context, BNB Greenfield, a decentralized data storage system with Web3 smart contract integration, aims to revolutionize the data ownership economy and bring new standards of utility to Web3.</p><p>In terms of specific functions, BNB Greenfield will be the distributed storage infrastructure of the BNB Chain. With BNB Greenfield, anyone with a BNB Chain address and some BNB can seamlessly store data and deploy websites on BNB Greenfield, much as they would with Dropbox.</p><p>BNB Greenfield uses an API interface similar to AWS S3, which allows users to programmatically manipulate their data, as well as store historical data from the BNB Smart Chain and infrastructure data from other BNB ecosystems. 
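</p><p>The S3-style interface mentioned above can be illustrated with a small sketch of path-style object addressing, as used by AWS-S3-compatible APIs in general; the endpoint, bucket, and key below are hypothetical examples, not official Greenfield values:</p>

```python
from urllib.parse import quote

# Path-style object addressing used by S3-compatible APIs:
#   https://<endpoint>/<bucket>/<key>
# The endpoint name below is made up for illustration.
def object_url(endpoint: str, bucket: str, key: str) -> str:
    return f"https://{endpoint}/{quote(bucket)}/{quote(key)}"

print(object_url("gnfd.example.org", "my-bucket", "data/report.json"))
# https://gnfd.example.org/my-bucket/data/report.json
```

<p>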
Unlike most decentralized storage, which places more emphasis on tight integration with the crypto world, BNB Greenfield was designed with the vast Web2 world in mind, and in addition to adopting a more mainstream API, its storage service will be denominated in US dollars, but payments will still be made in BNB.</p><p><img src="https://s2.loli.net/2023/09/19/sHKxtFXgEhyje5V.png"></p><p>The test network of Web3 infrastructure built by the BNB Chain core team is supported by community development teams from Amazon Web Services, NodeReal and Blockdaemon.</p><h2 id="From-1-to-100-what-are-the-remaining-problems-with-distributed-storage"><a href="#From-1-to-100-what-are-the-remaining-problems-with-distributed-storage" class="headerlink" title="From 1 to 100, what are the remaining problems with distributed storage?"></a>From 1 to 100, what are the remaining problems with distributed storage?</h2><p>To solve the storage problem, the crypto world has been actively exploring solutions. Decentralized storage is arguably a topic with a long history.<br>From the early days of IPFS, Filecoin, Swarm, to the rise of Arweave in the current bull market, the need to store data on the chain has been largely satisfied. However, the progress from 0 to 1 does not mean that the infrastructure of this sector is “good enough”.</p><p>In the early days of decentralized storage (one might even call it “prehistoric”), the Ethereum network was used for distributed storage. Although Ethereum could not store files directly, “data” could be written to it in a single transaction. For this reason, many hobbyists converted images into Base64 strings and stored them on Ethereum. However, when it comes to storing large amounts of data, Ethereum just doesn’t fit the bill. 
Moreover, deploying larger amounts of data to the mainnet would be very expensive due to gas costs.</p><p>In response, a number of decentralized storage networks have been created.</p><p>Take IPFS (InterPlanetary File System), for example, a protocol that came online back in 2015. The development team mainly benchmarked it against the Internet’s HTTP protocol, aiming to complement or even replace it. The vision is ambitious and broadly applicable. The downside of this infrastructure, however, is that the network is highly decentralized, and its peer-to-peer mechanism makes it difficult to guarantee the persistence of files saved on IPFS. It also lacks incentives, since no well-designed economic system was built in.</p><p><img src="https://s2.loli.net/2023/09/19/u5O27dHkU8JRqiE.png"></p><p>Filecoin, a public chain with built-in incentives based on the IPFS protocol, changes this. It serves as both the storage layer and the incentive layer of the IPFS protocol, with IPFS as the application layer of the whole system.</p><p>Both Filecoin and IPFS were developed by Protocol Labs, and the two protocols share several functional modules. However, Filecoin’s mining hardware requirements are high, and there is too much junk data on the network. Its performance and download speeds also hinder its development. The project’s aging design has caused many of its product concepts to drift away from current realities, costing it the traffic and attention it once enjoyed.</p><p>In 2018, Arweave went online. Arweave’s rise in the recent bull market ushered in a “new king” in the long-dormant storage sector. Arweave’s aim is to store data reliably over the long term, offering a storage solution called the Permaweb permanent network. The network does not use a blockchain structure, but rather a blockweave. 
On the network, users pay a one-time fee in exchange for a guarantee of permanent file storage, realizing truly permanent data storage for the first time.</p><p>In terms of real-world scenarios, Arweave’s characteristics make its scope of application narrower; it is mostly used for storing static art resources, such as dApp images, NFT images, and so on.<br>Looking across the entire decentralized storage track, limited applicable scenarios and ecosystem applications are the common weaknesses of almost all storage networks. At present, decentralized storage is most often used for loosely coupled art resources (such as small NFT images). Many Web3 projects have no real connection between computation and decentralized storage.</p><p>If a dApp has to read and write to the storage network at high frequency, the developer has to build across two networks (the chain where the smart contract resides and the chain where the storage resides). What’s more, users can’t personally control data on two different networks at the same time with a single chain ID.</p><h2 id="BNB-Greenfield-will-solve-this-problem-for-us"><a href="#BNB-Greenfield-will-solve-this-problem-for-us" class="headerlink" title="BNB Greenfield will solve this problem for us."></a>BNB Greenfield will solve this problem for us.</h2><p>In short, BNB Greenfield is a decentralized storage network for EVMs. It can fulfill both computing and storage needs at the same time.</p><h2 id="The-Imagination-of-“Storage-Computing”"><a href="#The-Imagination-of-“Storage-Computing”" class="headerlink" title="The Imagination of “Storage + Computing”"></a>The Imagination of “Storage + Computing”</h2><p>Based on the most basic characteristic of blockchains, users have autonomy over tokens through their accounts. However, when the use case is expanded to a wider range of applications, some problems arise. 
For example, it is not possible for users to prove their sovereignty over data on another network (the storage network) just by using the ID of one network.</p><p>Smart contracts are not new, but in existing decentralized networks, be it mainstream public chains, new public chains, or L2s, they only bring a better use of “computing power”. Although it is possible to connect with other storage networks, this kind of non-native cross-network development always causes developers trouble and leads to an unfriendly user experience.</p><p><img src="https://s2.loli.net/2023/09/19/m3sJABjtz8a5MLF.png"></p><p>Smart contracts and decentralized storage would be a milestone for Web3 if they could be integrated natively from the start and work with the large dApp ecosystem that already exists.</p><p>With the release of BNB Greenfield, the chain gives developers the ability to do both storage and computation on the same chain. This capability enables many on-chain and off-chain applications.</p><p>Within BNB Greenfield, users will be able to create, read, share and even execute data with a user experience and cost close to those of popular Web2 cloud storage services. Users can fully own their data assets and decide who can use them and how. Users’ data assets can be easily placed into a broad, smart-contract-based economy to gain financial value.</p><p>According to Victor Genin, Senior Solutions Architect at BNB Chain, “2021 was the year of DeFi’s breakthrough. 2022 saw the rise of NFTs and the decentralization of digital ownership. And in 2023, with BNB Greenfield, BNB Chain will create a new theme for data ownership and utility. BNB Greenfield will bring utility and financialization opportunities to data in storage, and programmability to data ownership.”</p><p>BNB Greenfield’s native integration with the BNB Smart Chain opens up a wealth of imaginative applications for the future. 
By allowing smart contracts to interact with a user’s own data assets, both ownership and read access can be managed financially through an NFT held by the EOA wallet on the BNB Smart Chain. The EOA can manage not only the NFT representing the data, but also the data itself. Native cross-chain protocols can facilitate Web3’s concept of “data ownership”.</p><h2 id="for-example"><a href="#for-example" class="headerlink" title="for example"></a>for example</h2><ol><li><p>Authors can digitally publish and sell their works directly on the BNB Smart Chain through smart contracts;</p></li><li><p>Data creators can upload and exchange their products in smart contracts and combine them with other DeFi;</p></li><li><p>Decentralized social media can be built on BNB Greenfield. Users can own their data on BNB Greenfield and store their social data in a decentralized way, while different social media front-ends help users build social networks. Extrapolating from this and benchmarking against Web2, we can find many potential use cases such as a decentralized Twitter, TikTok, Facebook, and so on.</p></li><li><p>Decentralized subscription systems. By saving content in decentralized storage and restricting permissions through native smart contracts, we can derive a large number of potential use cases, such as paid blogs, member access control, and so on.</p></li></ol><p>The unification of storage and computation makes development easier and unlocks many previously unattainable or difficult-to-achieve features. On this basis, more scenarios gain completely decentralized solutions.</p><p>Not surprisingly, the decentralized storage track, with its wide range of potential use cases, already has a handful of strong players, though each serves different specific use cases and occupies its own market niche. 
However, this segment still lacks “modern” capabilities: value capture mechanisms are poorly designed and development spans multiple chains, so the storage network needs a more “modern” redesign to better serve users. BNB Greenfield is targeting this segment.</p><h2 id="New-Public-Chain-Enhances-Value-Capture-for-BNB"><a href="#New-Public-Chain-Enhances-Value-Capture-for-BNB" class="headerlink" title="New Public Chain Enhances Value Capture for BNB"></a>New Public Chain Enhances Value Capture for BNB</h2><p>Just like BNB Beacon Chain and BNB Chain, BNB Greenfield will also be supported by BNB. This is the third chain supported by BNB; the launch of BNB Greenfield will once again broaden BNB’s usage scenarios, and the native storage function will bring more application value to BNB.</p><p>After the reintegration of BNB Chain in February 2022, its structure became comparable to that of Ethereum, with BNB Beacon Chain (the former Binance Chain) providing a secure technical base layer and BNB Chain (the former Binance Smart Chain) becoming the execution layer. In its ecosystem, dApps cover many sectors, including DeFi, NFT, GameFi, Metaverse, cross-chain, derivatives, infrastructure, and so on, spanning almost all areas of the on-chain world.</p><p><img src="https://s2.loli.net/2023/09/19/YSycRkq5viWaJAU.png"></p><p>However, in the upcoming Web3 era, one important segment was still not covered by BNB: the “storage” function launched this time. When Web3 is adopted at scale, the storage of large amounts of important data cannot safely be handed over to centralized services. It can be said that BNB Greenfield completes an important piece of the BNB map.</p><p>Before Greenfield, BNB already had multiple attributes. With the introduction of this storage network, we can see BNB Chain’s ambition to bet on the large-scale adoption of Web3. 
By upgrading from a single chain to multiple chains and increasing throughput, BNB has become a more independent and complete Layer 1 decentralized network ecosystem. As its use cases expand, BNB will also benefit from the growth dividend of the ecosystem’s development and capture value across CeFi and DeFi.</p>]]></content>
    
    
    <summary type="html">The origin and development of BNB Chain, and why it has greater room for development in the future: this article walks through the analysis.</summary>
    
    
    
    <category term="Blockchain" scheme="https://www.nablepart.com/categories/Blockchain/"/>
    
    
    <category term="crypto" scheme="https://www.nablepart.com/tags/crypto/"/>
    
    <category term="bnb" scheme="https://www.nablepart.com/tags/bnb/"/>
    
    <category term="$bnb" scheme="https://www.nablepart.com/tags/bnb/"/>
    
    <category term="bnb chain" scheme="https://www.nablepart.com/tags/bnb-chain/"/>
    
    <category term="binance" scheme="https://www.nablepart.com/tags/binance/"/>
    
  </entry>
  
  <entry>
    <title>Ethereum PoA consensus block production</title>
    <link href="https://www.nablepart.com/ec2dd9dcb7be/"/>
    <id>https://www.nablepart.com/ec2dd9dcb7be/</id>
    <published>2023-09-10T13:13:03.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
    <content type="html"><![CDATA[<p>Ethereum’s PoW (proof-of-work) consensus produces blocks even when there are no transactions; for smart contract users, these empty blocks waste a lot of disk space.<br>PoA (Proof of Authority) is enabled by adding the following parameters to the genesis block configuration file.</p><p>You can set “period”: 0 so that when there are no transactions, no block is produced, saving disk space.</p><figure class="highlight json"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"> <span class="attr">&quot;clique&quot;</span><span class="punctuation">:</span> <span class="punctuation">&#123;</span></span><br><span class="line">      <span class="attr">&quot;period&quot;</span><span class="punctuation">:</span> <span class="number">0</span><span class="punctuation">,</span>  <span class="comment">// Unit: seconds. The default is 15 seconds. If set to 0, no block is produced when there are no transactions</span></span><br><span class="line">      <span class="attr">&quot;epoch&quot;</span><span class="punctuation">:</span> <span class="number">30000</span> <span class="comment">// Unit: number of blocks. 
The default value is 30,000 blocks.</span></span><br><span class="line">    <span class="punctuation">&#125;</span></span><br></pre></td></tr></table></figure><figure class="highlight js"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="string">&quot;extraData&quot;</span>: <span class="string">&quot;0x000000000000000000000000000000000000000000000000000000000000000004819FcA652AD35F9cD688aAAfa53aD61DDA21990000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000&quot;</span>,      <span class="comment">// Clique extraData layout: 32-byte vanity prefix (64 hex zeros) + signer account address + 65-byte seal placeholder (130 hex zeros)</span></span><br></pre></td></tr></table></figure><p>The complete genesis.json configuration file for Ethereum PoA consensus block production:</p><figure class="highlight js"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span 
class="line">34</span><br><span class="line">35</span><br></pre></td><td class="code"><pre><span class="line">&#123;</span><br><span class="line"></span><br><span class="line">    <span class="string">&quot;config&quot;</span>: &#123;</span><br><span class="line">    <span class="string">&quot;chainId&quot;</span>: <span class="number">456719</span>,</span><br><span class="line">    <span class="string">&quot;homesteadBlock&quot;</span>: <span class="number">0</span>,</span><br><span class="line">    <span class="string">&quot;eip150Block&quot;</span>: <span class="number">0</span>,</span><br><span class="line">    <span class="string">&quot;eip155Block&quot;</span>: <span class="number">0</span>,</span><br><span class="line">    <span class="string">&quot;eip158Block&quot;</span>: <span class="number">0</span>,</span><br><span class="line">    <span class="string">&quot;byzantiumBlock&quot;</span>: <span class="number">0</span>,</span><br><span class="line">    <span class="string">&quot;constantinopleBlock&quot;</span>: <span class="number">0</span>,</span><br><span class="line">    <span class="string">&quot;petersburgBlock&quot;</span>: <span class="number">0</span>,</span><br><span class="line">    <span class="string">&quot;istanbulBlock&quot;</span>: <span class="number">0</span>,</span><br><span class="line">    <span class="string">&quot;clique&quot;</span>: &#123;</span><br><span class="line">      <span class="string">&quot;period&quot;</span>: <span class="number">0</span>,  <span class="comment">// The unit is second. The default value is 15 seconds. If the change value is 0, no block will be produced when there is no transaction</span></span><br><span class="line">      <span class="string">&quot;epoch&quot;</span>: <span class="number">30000</span> <span class="comment">// The unit is the number of blocks. 
The default value is 30,000 blocks.</span></span><br><span class="line">    &#125;</span><br><span class="line">  &#125;,</span><br><span class="line">  <span class="string">&quot;nonce&quot;</span>: <span class="string">&quot;0x0000000000000042&quot;</span>,</span><br><span class="line">  <span class="string">&quot;mixhash&quot;</span>: <span class="string">&quot;0x0000000000000000000000000000000000000000000000000000000000000000&quot;</span>,</span><br><span class="line">  <span class="string">&quot;difficulty&quot;</span>: <span class="string">&quot;0x00100000&quot;</span>,</span><br><span class="line">  <span class="string">&quot;coinbase&quot;</span>: <span class="string">&quot;0x3333333333333333333333333333333333333333&quot;</span>,</span><br><span class="line">  <span class="string">&quot;timestamp&quot;</span>: <span class="string">&quot;0x0&quot;</span>,</span><br><span class="line">  <span class="string">&quot;parentHash&quot;</span>: <span class="string">&quot;0x0000000000000000000000000000000000000000000000000000000000000000&quot;</span>,</span><br><span class="line">  <span class="string">&quot;extraData&quot;</span>: <span class="string">&quot;0x000000000000000000000000000000000000000000000000000000000000000004819FcA652AD35F9cD688aAAfa53aD61DDA21990000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000&quot;</span>,</span><br><span class="line">  <span class="string">&quot;gasLimit&quot;</span>: <span class="string">&quot;0x8000000&quot;</span>,</span><br><span class="line">  <span class="string">&quot;alloc&quot;</span>: &#123;</span><br><span class="line">    <span class="string">&quot;0x04819FcA652AD35F9cD688aAAfa53aD61DDA2199&quot;</span>: &#123;</span><br><span class="line">      <span class="string">&quot;balance&quot;</span>: <span class="string">&quot;99999999999999999999&quot;</span></span><br><span class="line">    &#125;,</span><br><span class="line">    <span 
class="string">&quot;0x59fE0911B06B1Fe0136744D383502da57c0c1229&quot;</span>: &#123;</span><br><span class="line">      <span class="string">&quot;balance&quot;</span>: <span class="string">&quot;99999999999999999999&quot;</span></span><br><span class="line">    &#125;</span><br><span class="line">  &#125;</span><br><span class="line">&#125;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>verify</p><figure class="highlight php"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br></pre></td><td class="code"><pre><span class="line">&gt; eth.<span class="title function_ invoke__">sendTransaction</span>(&#123; <span class="attr">from</span>:eth.coinbase, <span class="attr">to</span>:<span class="string">&quot;0x59fE0911B06B1Fe0136744D383502da57c0c1229&quot;</span>,<span class="attr">value</span>: <span class="number">1</span>&#125;);</span><br><span class="line"><span class="string">&quot;0x9fdde975c63cde9a2b741022a488b50ce1897c024476281b6461f3ea488f4750&quot;</span></span><br><span class="line"></span><br><span class="line">&gt; eth.<span class="title function_ invoke__">getTransactionReceipt</span>(<span class="string">&quot;0x9fdde975c63cde9a2b741022a488b50ce1897c024476281b6461f3ea488f4750&quot;</span>);</span><br><span class="line">&#123;</span><br><span class="line">  blockHash: <span 
class="string">&quot;0xc35d93dd73dac56075539fb87cee8f5b6c55ff3465cada34468dcd5891cd1a08&quot;</span>,</span><br><span class="line">  **blockNumber: <span class="number">1</span>,   <span class="comment">## This is the block number **</span></span><br><span class="line">  contractAddress: <span class="literal">null</span>,</span><br><span class="line">  cumulativeGasUsed: <span class="number">21000</span>,</span><br><span class="line">  effectiveGasPrice: <span class="number">1000000000</span>,</span><br><span class="line">  <span class="keyword">from</span>: <span class="string">&quot;0x04819fca652ad35f9cd688aaafa53ad61dda2199&quot;</span>,</span><br><span class="line">  gasUsed: <span class="number">21000</span>,</span><br><span class="line">  logs: [],</span><br><span class="line">  logsBloom: <span class="string">&quot;0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000&quot;</span>,</span><br><span class="line">  status: <span class="string">&quot;0x1&quot;</span>,</span><br><span class="line">  to: <span class="string">&quot;0x59fe0911b06b1fe0136744d383502da57c0c1229&quot;</span>,</span><br><span class="line">  transactionHash: <span class="string">&quot;0x9fdde975c63cde9a2b741022a488b50ce1897c024476281b6461f3ea488f4750&quot;</span>,</span><br><span class="line">  transactionIndex: <span class="number">0</span>,</span><br><span class="line">  type: <span class="string">&quot;0x0&quot;</span></span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure>]]></content>
    
    
    <summary type="html">Ethereum&#39;s PoW block production is a proof-of-work method of producing blocks</summary>
    
    
    
    <category term="Cryptocurrency" scheme="https://www.nablepart.com/categories/Cryptocurrency/"/>
    
    
    <category term="cryptocurrency" scheme="https://www.nablepart.com/tags/cryptocurrency/"/>
    
    <category term="Defi" scheme="https://www.nablepart.com/tags/Defi/"/>
    
    <category term="ethereum" scheme="https://www.nablepart.com/tags/ethereum/"/>
    
    <category term="PoA" scheme="https://www.nablepart.com/tags/PoA/"/>
    
  </entry>
  
  <entry>
    <title>DeepSpeed: a large-scale model training framework</title>
    <link href="https://www.nablepart.com/896852270e63/"/>
    <id>https://www.nablepart.com/896852270e63/</id>
    <published>2023-08-31T02:24:00.000Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
    <content type="html"><![CDATA[<h2 id="Background"><a href="#Background" class="headerlink" title="Background"></a>Background</h2><p>Large model development is currently booming, and training and fine-tuning large models is a key focus for many companies. The pain point of large model training is that model parameter counts are huge, easily reaching tens of billions, so it is essentially impossible to complete training on a single GPU. Multi-GPU or distributed training is therefore needed.</p><h2 id="I-Distributed-training"><a href="#I-Distributed-training" class="headerlink" title="I. Distributed training"></a>I. Distributed training</h2><p>1.1 Mainstream distributed training of large models currently takes two main forms:</p><ul><li>Data parallel training</li><li>Model parallel training</li></ul><h2 id="DeepSpeed"><a href="#DeepSpeed" class="headerlink" title="DeepSpeed"></a>DeepSpeed</h2><p>DeepSpeed is a distributed training tool from Microsoft, designed to support larger models and to provide additional optimization strategies and tools, such as ZeRO and Offload.</p><h3 id="2-1-Basic-Components"><a href="#2-1-Basic-Components" class="headerlink" title="2.1 Basic Components"></a>2.1 Basic Components</h3><p>Distributed training requires mastering the basic configuration of a distributed environment, including the nodes, the global rank (global process number), the local rank (process number within a node), the world size (total number of processes), the master node, and so on. 
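</p><p>As a minimal sketch of these components (the variable names below follow the common PyTorch/DeepSpeed launcher convention of RANK, LOCAL_RANK, WORLD_SIZE and MASTER_ADDR; the helper function is illustrative, not part of DeepSpeed itself):</p>

```python
import os

# Each process in a distributed job reads its identity from environment
# variables set by the launcher (torchrun, the deepspeed launcher, etc.).
def get_dist_config(env=None):
    env = os.environ if env is None else env
    return {
        "rank": int(env.get("RANK", 0)),              # global process number
        "local_rank": int(env.get("LOCAL_RANK", 0)),  # process number on this node
        "world_size": int(env.get("WORLD_SIZE", 1)),  # total number of processes
        "master_addr": env.get("MASTER_ADDR", "127.0.0.1"),  # master node address
        "master_port": env.get("MASTER_PORT", "29500"),
    }

# Example: the second process (local_rank 1) of an 8-process job.
cfg = get_dist_config({"RANK": "1", "LOCAL_RANK": "1", "WORLD_SIZE": "8"})
```

<p>On a real cluster the launcher sets these variables for every process; the master node's address and port are what the communication backend uses to rendezvous.</p><p>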
All of these components are closely tied to distributed training, and they are also closely related to one another, for example through the communication links between them.</p><h3 id="2-2-Communication-strategy"><a href="#2-2-Communication-strategy" class="headerlink" title="2.2 Communication strategy"></a>2.2 Communication strategy</h3><p>Since training is distributed, communication between machines must be maintained so that information such as model parameters and gradients can be transferred.</p><p>DeepSpeed supports communication backends such as mpi, gloo, and nccl.</p><table><thead><tr><th>communication strategy</th><th>role</th></tr></thead><tbody><tr><td>mpi</td><td>A cross-node communication library, often used for distributed training on CPU clusters</td></tr><tr><td>gloo</td><td>A collective communication library that supports distributed training on CPUs or GPUs</td></tr><tr><td>nccl</td><td>A GPU-specific communication library provided by NVIDIA, widely used for distributed training on GPUs</td></tr></tbody></table><p>When using DeepSpeed for distributed training, we can choose the communication library appropriate to our situation; for GPU-based distributed training, nccl is usually the choice.</p><h3 id="2-3-Zero-Zero-Redundancy-Optimizer"><a href="#2-3-Zero-Zero-Redundancy-Optimizer" class="headerlink" title="2.3 Zero (Zero Redundancy Optimizer)"></a>2.3 ZeRO (Zero Redundancy Optimizer)</h3><p>Microsoft developed ZeRO to address the limitations of data parallelism and model parallelism during distributed training. 
For example, ZeRO addresses the memory redundancy of data parallelism by partitioning the model states (optimizer states, gradients, parameters) across devices; in ordinary data parallel training, a full copy of all model parameters is replicated on every machine. At the same time, it can use a dynamic communication schedule to share the essential state among devices during training, preserving the computational granularity and communication volume of data parallelism.</p><p>ZeRO is a technique for optimizing large-scale model training. Its main purpose is to reduce the model's memory footprint so that the model can fit on the GPU. That footprint is divided into two main parts, <strong>Model States</strong> and <strong>Activation</strong>; ZeRO primarily addresses the memory occupied by the Model States.</p><p>ZeRO divides the model states into three parts:</p><table><thead><tr><th>States</th><th>Role</th></tr></thead><tbody><tr><td>Optimizer States</td><td>The data the optimizer needs when performing gradient updates</td></tr><tr><td>Gradient</td><td>The data produced during back-propagation, which determines the direction of the parameter update</td></tr><tr><td>Model Parameter</td><td>The model parameters, the information “learned” from the data during training</td></tr></tbody></table><h3 id="2-4-Zero-Offload"><a href="#2-4-Zero-Offload" class="headerlink" title="2.4 Zero-Offload"></a>2.4 Zero-Offload</h3><p>CPUs and CPU memory are relatively cheap compared to GPUs, so the idea behind Zero-Offload is to offload certain model states from the training process into CPU memory and certain computation onto the CPU.</p><p><img src="https://cdn.jsdelivr.net/gh/youngjuning/images@main/202310292050673.png"></p><p>Zero-Offload aims to minimize memory usage without degrading the system's computational efficiency; when the CPU is involved, you also need to consider communication and 
computation problems (communication: transfers between the GPU and the CPU; computation: doing too much work on the CPU lowers efficiency).</p><p>What Zero-Offload does is distribute compute nodes and data nodes across the GPU and the CPU: a compute node is placed on whichever device performs the computation, and a data node on whichever device is responsible for its storage.</p><h4 id="Zero-Offload-slicing-idea"><a href="#Zero-Offload-slicing-idea" class="headerlink" title="Zero-Offload slicing idea"></a>Zero-Offload slicing idea</h4><p>There are four compute nodes in the figure below: fwd, bwd, param update and float2half. The first two have computational complexity of roughly O(MB), where M is the number of model parameters and B is the batch size, while the last two have complexity O(M). So as not to reduce computational efficiency, the first two nodes are placed on the GPU. The last two involve little computation but need to access the Adam states, so they are placed on the CPU, with the Adam states naturally kept in CPU memory. To simplify the data-flow graph, the first two nodes are fused into one node, FWD-BWD Super Node, and the last two into another, Update Super Node, 
as shown on the right side of the figure below, where slicing is done along the two edges gradient 16 and parameter 16.</p><p><img src="https://cdn.jsdelivr.net/gh/youngjuning/images@main/202310292050107.png"></p><h4 id="Zero-Offload-computation-idea"><a href="#Zero-Offload-computation-idea" class="headerlink" title="Zero-Offload computation idea:"></a>Zero-Offload computation idea:</h4><p>The GPU performs the forward and backward computation and transmits the gradients to the CPU for the parameter update; the updated parameters are then transmitted back to the GPU. To improve efficiency, computation and communication can be overlapped: during back-propagation, the GPU waits for gradient values to fill a bucket, transmits that bucket to the CPU, and meanwhile computes the next gradients. By the time back-propagation finishes, the CPU essentially already holds the latest gradient values. Similarly, during the parameter update the CPU synchronizes the parameters it has computed back to the GPU, as shown in the following figure.</p><p><img src="https://cdn.jsdelivr.net/gh/youngjuning/images@main/202310292050452.png"></p><h3 id="2-5-Mixed-precision"><a href="#2-5-Mixed-precision" class="headerlink" title="2.5 Mixed precision:"></a>2.5 Mixed precision</h3><p>Mixed-precision training uses both FP16 (half-precision floating point) and FP32 (single-precision floating point) during training. Using FP16 can greatly reduce the memory footprint, allowing larger models to be trained. However, because FP16 has lower precision, problems such as vanishing gradients and numerical instability may occur during training.</p><p>DeepSpeed supports mixed-precision training, which can be activated in the config.json configuration file (“fp16.enabled”: true). 
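</p><p>For illustration, a minimal config.json enabling mixed precision, together with the ZeRO optimizations discussed above, might look like the sketch below. The values are placeholders, not recommendations; "fp16", "zero_optimization", "gradient_clipping" and "train_batch_size" are documented DeepSpeed configuration sections, and "loss_scale": 0 selects dynamic loss scaling.</p>

```json
{
  "train_batch_size": 32,
  "gradient_clipping": 1.0,
  "fp16": {
    "enabled": true,
    "loss_scale": 0,
    "initial_scale_power": 16
  },
  "zero_optimization": {
    "stage": 2,
    "offload_optimizer": {
      "device": "cpu"
    }
  }
}
```

<p>Stage 2 partitions optimizer states and gradients across data-parallel processes, and the offload_optimizer section moves the optimizer states and update step to the CPU in the Zero-Offload style described above.</p><p>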
During training, DeepSpeed automatically converts some operations to FP16 format and dynamically adjusts the loss-scaling factor as needed to keep training stable and accurate.</p><p>Mixed-precision training requires attention to a few issues, such as gradient clipping and the learning-rate schedule: gradient clipping prevents exploding gradients, and a good learning-rate schedule helps the model converge.</p><h2 id="III"><a href="#III" class="headerlink" title="III."></a>III. Summary</h2><p>DeepSpeed makes it practical to train and fine-tune large models with a limited number of machines, and it offers many more performance features that can be explored later.</p><p>Currently a mainstream stack for training large models is: GPU + PyTorch + Megatron-LM + DeepSpeed.</p><p><strong>Advantages</strong></p><ol><li><strong>Storage efficiency:</strong> DeepSpeed provides ZeRO, a novel solution for reducing training memory usage; unlike traditional data parallelism, it partitions the model states and gradients, saving a large amount of memory;</li><li><strong>Scalability:</strong> DeepSpeed supports efficient data parallelism, model parallelism, pipeline parallelism, and combinations of them, also referred to as 3D parallelism;</li><li><strong>Ease of use:</strong> only a few lines of code need to change in the training phase to let PyTorch models use DeepSpeed and ZeRO.</li></ol>]]></content>
    
    
    <summary type="html">Currently, large model development is booming, and training and fine-tuning large models is a focus for many companies. However, the pain point of large model training is that model parameter counts are huge, often reaching tens of billions, making it basically impossible to complete training on a single GPU alone. Multi-GPU or distributed training is therefore needed.</summary>
    
    
    
    <category term="AI" scheme="https://www.nablepart.com/categories/AI/"/>
    
    
    <category term="PyTorch" scheme="https://www.nablepart.com/tags/PyTorch/"/>
    
    <category term="Deep Learning" scheme="https://www.nablepart.com/tags/Deep-Learning/"/>
    
  </entry>
  
  <entry>
    <title>Pass Baldur&#39;s Gate 3 in 10 minutes?</title>
    <link href="https://www.nablepart.com/e1bf072c219d/"/>
    <id>https://www.nablepart.com/e1bf072c219d/</id>
    <published>2023-08-21T12:00:00.000Z</published>
    <updated>2025-08-25T09:00:39.794Z</updated>
    
    <content type="html"><![CDATA[<p><img src="https://s2.loli.net/2023/10/31/DE7ZbhlRgswYUQM.png" alt="image.png"></p><h2 id="Pass-Baldur’s-Gate-3-in-10-minutes-Fastpass-gamers-have-found-the-evil-way-again"><a href="#Pass-Baldur’s-Gate-3-in-10-minutes-Fastpass-gamers-have-found-the-evil-way-again" class="headerlink" title="Pass Baldur’s Gate 3 in 10 minutes? Fastpass gamers have found the evil way again"></a>Pass Baldur’s Gate 3 in 10 minutes? Fastpass gamers have found the evil way again</h2><p>According to first-week gameplay data released by Larian Studios on August 11, only 368 players managed to finish Baldur’s Gate 3 within the first 3 days (72 hours).</p><p>A poll of players on the How Long to Beat website suggests that a near-complete playthrough of Baldur’s Gate 3 takes more than 80 hours on average.</p><p><img src="https://s2.loli.net/2023/10/31/noapPbgNF9r8UGC.png" alt="image.png"></p><p>The wealth of content in Baldur’s Gate 3 is evident. Counting the time spent customizing a character’s face, reloading saves for trial and error, reading the plot text, and even the second playthrough needed to experience the multiple endings, it’s not unusual to sink hundreds of hours into the game. Less than two weeks after the game’s official release, most players are probably still somewhere in the first chapter.</p><p>The exception is the speedrunners: on August 13, Canadian gamer Mae uploaded a speedrun video, reaching the credits in just 10 minutes and 52 seconds.</p><p>The following day, he uploaded another video, setting the record at 10 minutes and 03 seconds. You heard right: not 10 hours, but 10 minutes.</p><p><img src="https://s2.loli.net/2023/10/31/bo4Z1UjGwpSB6dI.png" alt="image.png"></p><p>Mae’s speedrun relied on a clever shortcut. 
He chose the origin character Gale, because one of Gale’s personal endings arrives a little sooner and doesn’t even require beating all three chapters of the game.</p><p><img src="https://s2.loli.net/2023/10/31/95sOtxafI7HmSJc.png" alt="image.png"></p><p>[Note: The following contains mild Baldur’s Gate 3 Chapter 2 spoilers]</p><p>Gale is an unfortunate human mage who, because of his relationship with Mystra, the goddess of magic, has an unstable magic orb implanted in his chest. The power contained in this orb is capable of destroying an entire city; it would not be an exaggeration to call it a nuclear weapon within a magical worldview.</p><p>In one of the episodes at the end of Chapter 2, Gale can follow the goddess’s instructions to detonate the orb and die together with the villain.</p><p><img src="https://s2.loli.net/2023/10/31/lQnBLN7T1zGVZqx.png" alt="image.png"></p><p>Of course, this doesn’t solve the story’s central problem, and it isn’t a good ending by any stretch of the imagination. But since the game ends immediately after the self-destruct and rolls the credits, it is indeed an ending.</p><p>Mae, who treats this ending as the ultimate speedrun goal, simply needs to reach the final scene of Chapter 2 as quickly as possible and trigger Gale’s self-destruct. Over the 10-minute-and-03-second run, Mae doesn’t exploit any bugs, relying solely on his understanding of the game mechanics.</p><p>At the character creation screen, Mae sets Gale’s Strength to 17, boosting his jump distance to the maximum. 
With the help of Enhanced Jump and Feather Fall, Gale followed a carefully planned route over the mountains, skipping most of the gameplay elements that keep the average player busy for dozens of hours across the two chapters: scenery, dialog, battles, side quests, companions…</p><p>For plot reasons, the half-elf cleric Shadowheart is forced to join the party at the end of the first act, and her protective spells help Gale avoid several deadly attacks. The battles that weren’t skipped got Gale to level 3, where he learns Misty Step, a spell with a teleportation effect, and then uses it to escape the crucial boss fight at the end of Chapter 2.</p><p><img src="https://s2.loli.net/2023/10/31/P8BciIvrwbGCYNZ.png" alt="image.png"></p><h2 id="Misty-Step"><a href="#Misty-Step" class="headerlink" title="Misty Step"></a>Misty Step</h2><p>On the lift before the last room of Chapter 2, Gale strips naked. This isn’t part of the speedrun strategy; it’s simply because Gale can’t do anything else on the lift.</p><p>So, clad only in a pair of underwear, Gale suddenly jumps in front of the villain and without hesitation chooses to blow himself up, sealing Mae’s speedrun record while going down in infamy as the most egregious exhibitionist and explosives maniac in the history of the continent of Faerûn.</p><p><img src="https://s2.loli.net/2023/10/31/JVzKbM498gxGki5.png" alt="image.png"></p><p>In many subsequent attempts, Mae at one point even optimized the run down to 9 minutes.</p><p><img src="https://s2.loli.net/2023/10/31/CS2wWHEFtOPBcM3.png" alt="image.png"></p><p>As one of the speedrun pioneers, Mae successfully applied for moderator access to the Baldur’s Gate 3 section of the Speedrun site. He designed three categories, namely “Any%”, “Any% (No Gale)” and “Bear%”.</p><p>The first category is cleared as soon as you see the end credits, and Mae’s new record is filed there. 
The second category prohibits using the Gale self-destruct shortcut, so it theoretically requires beating all three chapters, which obviously takes longer.</p><p>The third category presumably refers to getting intimate with the bear, or rather the druid character Halsin, which Mae claims was willed into existence by the forum’s players: “I’m terrified to see what you’ll come up with.”</p><p><img src="https://github.com/zizhuspot/gaming.varygames.com/assets/134364698/1a2ad908-e3df-48f4-914e-8eeebf76bb3a" alt="image"></p><p>Scared as he is, Mae has already embarked on his next projects. The new categories are called “Romance%” and “Sex%”, previews of “Bear%”: the target doesn’t have to be a bear, any NPC will do.</p><p>In an August 15 speedrun attempt, Mae’s custom character jumped up and down just like Gale and won the heart of Lae’zel, a female githyanki warrior, in just 7 minutes and 54 seconds. Over the following two days he compressed that time to 4 minutes and 12 seconds.</p><p><img src="https://github.com/zizhuspot/gaming.varygames.com/assets/134364698/2dc2e690-9ade-4dce-a195-5617e03dd64c" alt="image"></p><p>For now, the four speedrun categories have no challengers other than Mae. But wait a little longer: as more players finish the game and digest its content and mechanics, the Baldur’s Gate 3 speedrun community will keep growing, contributing even more jaw-dropping categories and records.</p><p><img src="https://s2.loli.net/2023/10/31/GdRVvyo5CtnDJ82.png" alt="image.png"></p>]]></content>
    
    
    <summary type="html">Pass Baldur&#39;s Gate 3 in 10 minutes? Fastpass gamers have found the evil way again</summary>
    
    
    
    <category term="Game Research Associates" scheme="https://www.nablepart.com/categories/Game-Research-Associates/"/>
    
    <category term="Gaming Strategy" scheme="https://www.nablepart.com/categories/Gaming-Strategy/"/>
    
    
    <category term="Baldur&#39;s Gate 3" scheme="https://www.nablepart.com/tags/Baldur-s-Gate-3/"/>
    
    <category term="NPC" scheme="https://www.nablepart.com/tags/NPC/"/>
    
    <category term="Gale" scheme="https://www.nablepart.com/tags/Gale/"/>
    
  </entry>
  
  <entry>
    <title>Neverwinter gets Sony&#39;s &quot;Gamer&#39;s Choice Award&quot;</title>
    <link href="https://www.nablepart.com/eedeecb1ffba/"/>
    <id>https://www.nablepart.com/eedeecb1ffba/</id>
    <published>2023-08-19T12:00:00.000Z</published>
    <updated>2025-08-25T09:00:39.794Z</updated>
    
    <content type="html"><![CDATA[<p><img src="https://s2.loli.net/2023/10/31/3Vxkz9H6tUZwdgy.png" alt="image.png"></p><h2 id="Neverwinter-gets-Sony’s-“Gamer’s-Choice-Award”"><a href="#Neverwinter-gets-Sony’s-“Gamer’s-Choice-Award”" class="headerlink" title="Neverwinter gets Sony’s “Gamer’s Choice Award”"></a>Neverwinter gets Sony’s “Gamer’s Choice Award”</h2><p>On August 17th, Sony PlayStation’s official Twitter account announced that the winner of July’s Players’ Choice Award was Neverwinter.</p><p>Note that this award is decided by an official voting campaign at the end of each month, in which all players can vote for that month’s new releases according to their personal preferences; only the work with the highest number of votes receives the “Players’ Choice Award”.</p><p>Familiar blockbusters and much-discussed works have received this award before, such as this year’s “Hogwarts Legacy” and last year’s “Elden Ring” and “Stray”, all games with high profiles and strong popularity among players.</p><p><img src="https://s2.loli.net/2023/10/31/qlt7TrhHVONgWoK.png" alt="image.png"></p><p>This time is more special: as you may know, the game is not a new release. Over the past two years it has launched on PC and Xbox respectively, and even players who have never touched it have probably already heard of it across the major social media platforms.</p><p>In July this year, on the very day the game officially switched from a buy-to-play model to free-to-play operation, it finally arrived on the PlayStation platform, up against much-discussed new releases of the same period such as “Remnant 2” and the PSVR2 game “Synapse”. For a game that has been live for more than two years to still stand out among them is no small feat.</p><p>We’ve covered the game and its recent updates many times before, and it’s been recognized for its sincerity, 
both in terms of how often it’s been updated with new content and the benefits it has given back to veteran players since going free-to-play.</p><h2 id="For-new-players-who-have-not-yet-been-exposed-to-the-game"><a href="#For-new-players-who-have-not-yet-been-exposed-to-the-game" class="headerlink" title="For new players who have not yet been exposed to the game"></a>For new players who have not yet been exposed to the game</h2><p>For new players who have not yet tried the game, its visibility on a platform and the awards it has earned are undoubtedly important factors in persuading them to start playing. So how is the game performing in this regard?</p><p>Take Steam, the most visible platform, first: before July the game had already reached 20 million sales worldwide, maintaining steady growth of roughly 10 million per year and setting a new record for domestic buy-to-play sales in recent years. This steady growth also shows that even though the game has been live for two years, the developers are still maintaining player retention through efficient, quality-assured version updates.</p><p>On July 6, the day the move to free-to-play was announced, Neverwinter sold another one million copies on Steam, even surging to second place on the daily sales chart:</p><p><img src="https://s2.loli.net/2023/10/31/Emlz4UKFMkDsuT9.png" alt="image.png"></p><p>Part of the reason these newcomers were willing to buy the game at full price on the eve of its free-to-play release was, of course, the generous compensation. But whether it is “compensation” or “free”, it is a real opportunity to win over players who were still uncertain or skeptical about the game’s quality, and the developers are clearly confident they can retain these newcomers with high-quality game content.</p><p>Over the past year, the game has been released on more platforms and consoles and has collaborated with a number of IPs. In June last year it was selected for Microsoft’s Xbox Game Pass and landed on the Xbox console. Note that the games that join XGP are hand-picked by Microsoft and are basically all works with guaranteed quality and reputation.</p><p>Being the first domestic game to join XGP is itself a recognition of the game’s quality, and in the first month after its Xbox launch, new players exceeded 1 million and the game stayed on the platform’s best-seller list.</p><p><img src="https://s2.loli.net/2023/10/31/zhxVowg3i2qd4AG.png" alt="image.png"></p><h2 id="The-game’s-arrival-on-PlayStation"><a href="#The-game’s-arrival-on-PlayStation" class="headerlink" title="The game’s arrival on PlayStation"></a>The game’s arrival on PlayStation</h2><p>The game’s arrival on PlayStation, the other major console, in July this year came as a “free” addition, yet it still stood out among the month’s other popular releases. The fact that it won the Players’ Choice Award, which is voted on entirely by players, is a testament to its influence among gamers; the last Chinese game to win the award was Genshin Impact.</p><p>On Epic, the other big PC storefront, the game has likewise long sat on the best-seller list and received the platform’s “most popular game” award. Taken together, the game has achieved strong sales and awards across platforms and storefronts, and has been recognized by most players on each of them. 
This is not easy for a product that has been in operation for two years and has switched from buy-to-play to free-to-play.</p><p>At this point, the developers have also taken the occasion of the “Grand Slam on four platforms” to launch more generous player giveback events.</p><p><img src="https://s2.loli.net/2023/10/31/zrklp4chqV6TexX.png" alt="image.png"></p><h2 id="The-game-has-won-different-awards-on-all-four-platforms-which-is-really-a-“Grand-Slam”"><a href="#The-game-has-won-different-awards-on-all-four-platforms-which-is-really-a-“Grand-Slam”" class="headerlink" title="The game has won different awards on all four platforms, which is really a “Grand Slam”!"></a>The game has won different awards on all four platforms, which is really a “Grand Slam”!</h2><p>This weekend, players can receive 100 hero coins simply by logging in, enough to unlock a hero of their choice, along with experience bonuses, treasure bonuses, PVE rewards and other benefits. These are generous rewards handed out once again after the free-to-play compensation round, and the conditions for claiming them are essentially zero-threshold.</p><p><img src="https://s2.loli.net/2023/10/31/gZP2lTCfQAOtJ7r.png" alt="image.png"></p><h2 id="Officials-defined-this-event-as-“Grand-Slam-Weekend”"><a href="#Officials-defined-this-event-as-“Grand-Slam-Weekend”" class="headerlink" title="Officials defined this event as “Grand Slam Weekend”."></a>Officials defined this event as “Grand Slam Weekend”.</h2><p>The game’s results across platforms also show that it is still popular, not “relying on going free-to-play to prop up its popularity” as some players believe. These results owe much to the production team listening to players, improving the game experience and meeting different demands.</p><p>For example, to address the problem of the game being too hardcore, they added a PVE mode to 
meet the needs of casual players on the one hand, and on the other designed a practical training mode for players who want to improve but don’t know how:</p><h2 id="The-training-mode-provides-a-lot-of-useful-specialized-exercises"><a href="#The-training-mode-provides-a-lot-of-useful-specialized-exercises" class="headerlink" title="The training mode provides a lot of useful specialized exercises"></a>The training mode provides a lot of useful specialized exercises</h2><p>Likewise, to deal with “the proliferation of cheat software and the low cost of cheating”, the production team cooperated with public security organs to crack down on a number of cheat-making gangs, and ban announcements for offending players are frequently posted on the official blog.</p><p><img src="https://s2.loli.net/2023/10/31/kwI4WG6NHhsULYV.png" alt="image.png"></p><p>Players therefore have reason to believe that a game with these achievements and honors will, as officially promised, keep updating with even better quality and an attitude that values players in the third year of its operation, and it is worth looking at the future prospects and quality of this domestic game with more confidence.</p><p><img src="https://s2.loli.net/2023/10/31/GdRVvyo5CtnDJ82.png" alt="image.png"></p>]]></content>
    
    
    <summary type="html">A month after being announced as free, Neverwinter gets Sony&#39;s &quot;Gamer&#39;s Choice Award&quot;</summary>
    
    
    
    <category term="Game News" scheme="https://www.nablepart.com/categories/Game-News/"/>
    
    <category term="Game Research Associates" scheme="https://www.nablepart.com/categories/Game-Research-Associates/"/>
    
    
    <category term="Neverwinter" scheme="https://www.nablepart.com/tags/Neverwinter/"/>
    
    <category term="Sony PlayStation" scheme="https://www.nablepart.com/tags/Sony-PlayStation/"/>
    
    <category term="Stray" scheme="https://www.nablepart.com/tags/Stray/"/>
    
    <category term="Eldon&#39;s Law Ring" scheme="https://www.nablepart.com/tags/Eldon-s-Law-Ring/"/>
    
  </entry>
  
  <entry>
    <title>Gray area of MOD rights nearly ruined Red Alert Circle</title>
    <link href="https://www.nablepart.com/4983baba9203/"/>
    <id>https://www.nablepart.com/4983baba9203/</id>
    <published>2023-08-15T12:00:00.000Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
    <content type="html"><![CDATA[<p><img src="https://s2.loli.net/2023/10/31/zClm8UyeYnuNaw4.png" alt="image.png"></p><h2 id="Gray-area-of-MOD-rights-nearly-ruined-Red-Alert-Circle"><a href="#Gray-area-of-MOD-rights-nearly-ruined-Red-Alert-Circle" class="headerlink" title="Gray area of MOD rights nearly ruined Red Alert Circle"></a>Gray area of MOD rights nearly ruined Red Alert Circle</h2><p>Few games are as popular as Command &amp; Conquer: Red Alert 2, which was a huge hit on release and is still going strong 20 years later.</p><p>Enter the keyword “Red Alert” into any major streaming platform and the millions of plays are a testament to the game’s popularity; despite a certain amount of nostalgia at work, Red Alert really has made a comeback in the new era.</p><p><img src="https://s2.loli.net/2023/10/31/nX1RzH3d5Z2TSho.png" alt="image.png"></p><p>There are many reasons for this resurgence. Perhaps it is the promotion by video creators and streamers, or perhaps the well-built community battle platforms that keep PVP enthusiasts supplied with matches. In the long run, though, the key to Red Alert’s longevity is probably the quality mods that players have polished and perfected through their own dedication.</p><p>As an old game with excellent mod support, Red Alert has been revitalized by the efforts of countless mod creators at home and abroad over the past 20 years, with players regularly getting brand-new maps and gameplay. In the early years, the mod “Glory of the Republic” even surpassed the original in popularity and became a childhood memory for many players.</p><p>However, the seemingly bright and prosperous Red Alert mod scene would eventually crack under various pressures.</p><p>Only when we finally recognized the importance of mod rights, a gray area of the law, were we surprised to find that a community environment built on love is so fragile.</p><h2 id="Mind-Over-Matter-is-a-topic-that-can’t-be-avoided"><a href="#Mind-Over-Matter-is-a-topic-that-can’t-be-avoided" class="headerlink" title="Mind Over Matter is a topic that can’t be avoided"></a>Mind Over Matter is a topic that can’t be avoided</h2><p>When it comes to Red Alert mods, “Mind Over Matter” is a topic that can’t be avoided. Since its release in 2005, the mod has continually brought players a large number of original units, campaign plots, new factions and other content. After 18 years of iteration, player recognition of “Mind Over Matter” is now on a par with the original “Yuri’s Revenge”, and it is known as the “almost perfect Yuri’s Revenge”.</p><p><img src="https://s2.loli.net/2023/10/31/TMPlwIKurmCntX6.png" alt="image.png"></p><p>Unfortunately, because “Mind Over Matter” is a private modding project, the operating costs of its multiplayer servers rely on the production team’s volunteer passion and player donations. On top of that, the domestic network environment is particular, so the mod’s PVP servers suffer lag and disconnection problems from time to time, seriously affecting the gaming experience.</p><p><img src="https://s2.loli.net/2023/10/31/bcJPxramhfosL4z.png" alt="image.png"></p><p>Members of the production team have also called in the official forum for Chinese servers to be built.</p><p>As mentioned above, the popularity of Red Alert is closely related to the rise of PVP battle platforms. 
In the era when the Internet was not well developed, it was not easy for players to call up friends to play together, but today fully featured battle platforms can satisfy most needs of most players, and online battles have long been an indispensable part of Red Alert’s community.</p><p>As the saying goes, “online games don’t make people violent, network lag does”. For multiplayer battles, servers set up by domestic battle platforms are obviously more reliable than Mind Over Matter’s overseas ones. On July 21st, the well-known Red Alert online battle platform “Rambo Play Battle Platform” announced that it would integrate the Mind Over Matter MOD into its own platform, so that players would no longer need to build servers themselves or worry about frequent disconnections and lag; they could just download it on the platform with a click and play directly.</p><p><img src="https://s2.loli.net/2023/10/31/mxOljoyM3av4RdS.png" alt="image.png"></p><p>In addition, Rambo Play also promised in the update announcement that Mind Over Matter would not be used for profit, and that if players were enthusiastic enough it would invest more resources in the game in the future, such as adding a ladder function or organizing special matchmaking tournaments.</p><p><img src="https://s2.loli.net/2023/10/31/Zd81e3NPbQcxzig.png" alt="image.png"></p><p>From an onlooker’s point of view, Rambo Play’s approach was impeccable, simplifying the download and installation of the mod and providing Chinese players with stable, reliable servers. 
But before that, Rambo Play deliberately covered up one of the most basic facts - they simply didn’t have authorization from the production team of Mind Over Matter.</p><p>Mind Over Matter was originally developed by Polish gamer Speeder, and the team has gradually expanded in subsequent updates, now numbering more than ten members. Although mod copyright has always been a thorny issue, Mind Over Matter contains a great deal of original art material and scene modeling, and judging from EA’s operating strategy in recent years, the publisher has an open and positive attitude toward player-made mods, so by rights Mind Over Matter should be duly respected.</p><p><img src="https://s2.loli.net/2023/10/31/u3aNX8e6EdSBJlT.png" alt="image.png"></p><p>The original game code of Command &amp; Conquer and Red Alert has been open-sourced on GitHub to help players make mods.</p><p>According to Rambo Play’s official statement, they did notify the production team of Mind Over Matter in advance, but the latter made no response, and Rambo Play was emboldened to put Mind Over Matter on its platform without permission. I’m afraid this is a case of “silence” being taken as “tacit approval”.</p><h2 id="The-fact-that-player-made-mods-are-not-taken-seriously-is-not-news-to-Red-Alert"><a href="#The-fact-that-player-made-mods-are-not-taken-seriously-is-not-news-to-Red-Alert" class="headerlink" title="The fact that player-made mods are not taken seriously is not news to Red Alert."></a>The fact that player-made mods are not taken seriously is not news to Red Alert.</h2><p>Even before Mind Over Matter went up on its platform, Rambo Play had already appeared before Red Alert mod players as a negative example. 
Earlier this year, a group of players led by the Bilibili creator “Aimu Kui” denounced Rambo Play’s platform for belittling MOD copyrights and turning a blind eye to the piracy of art assets; it had even introduced a payment mechanism for in-map purchases, seriously undermining the modding community’s principles and atmosphere of mutual benefit, mutual assistance, and openness.</p><p><img src="https://s2.loli.net/2023/10/31/ACNMqRJdoIu3psO.png" alt="image.png"></p><p>Rambo Play had previously put its own name on the Red Alert 2 production staff list.</p><p>For this reason, on June 30th Rambo Play published an announcement called “Respect and protect original intellectual property rights”, calling out the bad practice of stealing pictures and materials in the Red Alert scene, and expressing its willingness to assist original creators in defending their rights and jointly promoting the innovative development of Red Alert. 
However, less than a month later, this announcement seems to have lost its effectiveness.</p><p>Shortly after Mind Over Matter hit Rambo Play’s platform, players uncovered plenty of tampering through repeated comparisons, such as the in-game billboards originally used to advertise CNCNET, the officially designated online platform, being modified into Rambo Play’s own advertisements.</p><p>Others, after exhaustive comparative testing, confirmed Rambo Play’s tampering with the Mind Over Matter mod’s files, including but not limited to altering encounter loading maps, tweaking game values, and more.</p><p><img src="https://s2.loli.net/2023/10/31/fUVaAkimOcdBLq5.png" alt="image.png"></p><p>Replacing billboards may be seen as a commercial consideration, but privately modifying game files seems like a thankless and pointless endeavor.</p><p>The prevailing view among players is that by modifying the files, Rambo Play created an exclusive version of Mind Over Matter for its own platform: even if players one day delete the platform and try to play against each other elsewhere, they will run into incompatible files and be unable to launch the game properly.</p><p>Interestingly, in the process some players submitted the relevant files to the well-known antivirus vendor Fire Down, whose officials essentially confirmed Rambo Play’s unauthorized tampering with local files.</p><p>The evidence was overwhelming and there was no way to defend themselves. At this point Rambo Play seemed to have lost the motivation to prove itself, and a week later it took the initiative to pull Mind Over Matter, offering its trump-card excuse: “Due to the existence of bad value orientation as well as the suspicion of discrediting the country’s image, we decided to take down the Mind Over Matter mod.”</p><p>The Red Alert franchise’s flirtations with and spoofs of the Soviet Union and the socialist camp are objective facts, but that doesn’t mean that Mind Over Matter has to take the blame.</p><p>By rough count, more than half of the current Mind Over Matter production team are Chinese, the MOD is loved and followed in China by countless fans of the series, and thousands of players have personally vetted all kinds of content in the game. Rambo Play’s move smacks of reporting someone else to the teacher for shortcomings that were its own.</p><p>In any case, the players’ basic demand of forcing Rambo Play to take down Mind Over Matter was successfully achieved. In terms of results, at least, we won a stage victory - but this hard-won victory really is only a “stage”.</p><h2 id="Rambo-took-down-Mind-Over-Matter"><a href="#Rambo-took-down-Mind-Over-Matter" class="headerlink" title="Rambo took down Mind Over Matter"></a>Rambo took down Mind Over Matter</h2><p>At the same time that Rambo Play took down Mind Over Matter, several Red Alert players collectively reported that they had been doxxed, and were bombarded with text messages and phone calls throughout the day.</p><p>At the same time, CNCNET, the only officially designated online platform for Mind Over Matter, suffered a DDoS attack from an unknown source, with a large number of players reporting that the servers they normally use went down for a whole day - clearly not the occasional hiccup that CNCNET servers have in daily operation.</p><p>We don’t know whether this mess was related to the Mind Over Matter incident, but a week later, Bamboo Dragonfly, a well-known modder in the Red Alert community, posted in the forums claiming that he had received a large number of harassing phone calls and threatening text messages, announced his retirement, and said he wanted nothing more to do with Red Alert in the future.</p><p>As for the production team of Mind Over Matter, which is at the center of public 
opinion, they have also decided to temporarily postpone development of the next version. No matter how you look at it, the innocent Red Alert players have lost again and again, and their minds are truly at their end.</p><p>The Mind Over Matter incident is just a microcosm of the Red Alert mod scene. In a circle where everyone races to the bottom, what happened to Mind Over Matter is not an isolated case, and it is obviously not the worst one.</p><p>Countless small, obscure mods huddle under the protection of platforms while being subjected to this so-called “intellectual property protection”. Even without Rambo Play, other platforms have starred in similar incidents, such as the Battle.net Battle Platform, another well-known online battle platform in the Red Alert community.</p><p><img src="https://s2.loli.net/2023/10/31/usWCkhnA2po8TvU.png" alt="image.png"></p><p>Battle.net has also released mods without the authorization of the Mind Over Matter team.</p><p>Judging by EA’s attitude toward Red Alert mods, as long as they are not for profit, players are more than welcome to create all kinds of mods for Red Alert.</p><p>The reality, however, is that online platforms use mods to steer users toward spending, illicit behaviors such as stealing pictures and materials are endless, and even Rambo Play, which has repeatedly provoked players’ anger, merely delisted the mod without suffering any substantial losses.</p><p><img src="https://s2.loli.net/2023/10/31/hRaQXoWwkEjcFG3.png" alt="image.png"></p><p>Rambo Play changed its name to Rambo Matchmaking Platform in June this year and operates as usual.</p><p>The revival of Red Alert after 20 years is inseparable from decades of players’ love, a love that is both great and powerless. With no official operation and maintenance in China’s Red Alert scene, MOD copyright disputes abound, there are very few legal means for creators to defend their rights, and it seems this environment will persist for a long, long time.</p><p><img src="https://s2.loli.net/2023/10/31/GdRVvyo5CtnDJ82.png" alt="image.png"></p>]]></content>
    
    
    <summary type="html">Few games are as popular as Command &amp; Conquer: Red Alert 2, which was a huge hit at the time of its release and is still going strong 20 years later.</summary>
    
    
    
    <category term="Game News" scheme="https://www.nablepart.com/categories/Game-News/"/>
    
    <category term="Game Research Associates" scheme="https://www.nablepart.com/categories/Game-Research-Associates/"/>
    
    
    <category term="MOD" scheme="https://www.nablepart.com/tags/MOD/"/>
    
    <category term="Command &amp; Conquer:Red Alert 2" scheme="https://www.nablepart.com/tags/Command-Conquer-Red-Alert-2/"/>
    
    <category term="Red Alert" scheme="https://www.nablepart.com/tags/Red-Alert/"/>
    
    <category term="Red Alert mods" scheme="https://www.nablepart.com/tags/Red-Alert-mods/"/>
    
    <category term="Mind Over Matter" scheme="https://www.nablepart.com/tags/Mind-Over-Matter/"/>
    
  </entry>
  
  <entry>
    <title>The Rise of DeFi: Synergies with Cryptocurrency Exchange Application Development</title>
    <link href="https://www.nablepart.com/8c2dd8b9a64e/"/>
    <id>https://www.nablepart.com/8c2dd8b9a64e/</id>
    <published>2023-08-02T10:02:08.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
    <content type="html"><![CDATA[<p>The financial landscape has been revolutionized in recent years, driven by the emergence of decentralized finance, commonly known as DeFi. This groundbreaking paradigm shift has redefined the way we perceive and engage with traditional financial services. DeFi leverages blockchain technology to create an open, trustless, permissionless ecosystem where users can access a wide range of financial instruments without an intermediary.</p><p>Meanwhile, the proliferation of cryptocurrency trading platforms has played a key role in shaping the broader cryptocurrency landscape. These platforms serve as portals for millions of users to buy, sell, and trade digital assets. However, the synergy between DeFi and cryptocurrency trading app development has the potential to amplify the impact of both, ushering in a new era of financial inclusion and innovation.</p><p>Cryptocurrency Exchange Application Development</p><p>Ultimately, this article aims to provide a comprehensive understanding of the interplay between DeFi and cryptocurrency exchange application development, offering insights into the potential benefits, challenges, and future prospects that this convergence brings to the global financial ecosystem. These two pillars of the blockchain industry have the ability to redefine finance, democratize financial services, and pave the way for a more inclusive and decentralized future.</p><h2 id="What-is-Cryptocurrency-Exchange-Application-Development"><a href="#What-is-Cryptocurrency-Exchange-Application-Development" class="headerlink" title="What is Cryptocurrency Exchange Application Development?"></a>What is Cryptocurrency Exchange Application Development?</h2><p>Cryptocurrency exchange application development is the process of creating software applications that facilitate the buying, selling and trading of cryptocurrencies. 
These applications act as digital platforms where users can interact with various digital assets and execute transactions in a secure and user-friendly manner. The development of cryptocurrency trading apps involves multiple stages, including designing the user interface, implementing robust security features, integrating blockchain technology, and ensuring seamless functionality across different devices and operating systems.</p><p>Developers must also consider factors such as liquidity management, order matching algorithms, and compliance with regulatory standards. The goal of cryptocurrency trading app development is to provide a reliable and intuitive platform that enables users to confidently navigate the cryptocurrency market while also contributing to the broader decentralized financial ecosystem. This process plays a crucial role in democratizing digital assets and driving innovation in the rapidly evolving world of blockchain and cryptocurrencies.</p><h2 id="Synergy-with-cryptocurrency-exchange-application-development"><a href="#Synergy-with-cryptocurrency-exchange-application-development" class="headerlink" title="Synergy with cryptocurrency exchange application development"></a>Synergy with cryptocurrency exchange application development</h2><p>Synergy with cryptocurrency trading app development refers to the mutually beneficial relationship between the development of cryptocurrency trading platform apps and other elements of the broader blockchain and fintech ecosystem. This synergy involves leveraging the functionality of cryptocurrency trading applications to enhance and support aspects of the cryptocurrency and decentralized finance (DeFi) space.</p><p>It encompasses a range of interactions such as integrating the DeFi protocol into trading platforms, optimizing the user experience, implementing advanced security features, and exploring innovative financial tools. 
The partnership aims to create a seamless and efficient environment for users to engage with digital assets while promoting the growth and development of the cryptocurrency market. By combining the strengths of cryptocurrency trading application development and other elements of the blockchain industry, this synergy contributes to the overall advancement and mainstream adoption of cryptocurrencies and decentralized financial services.</p><h2 id="Benefits-of-Combining-DeFi-and-Exchange-Development"><a href="#Benefits-of-Combining-DeFi-and-Exchange-Development" class="headerlink" title="Benefits of Combining DeFi and Exchange Development"></a>Benefits of Combining DeFi and Exchange Development</h2><p>Below are the benefits of combining DeFi and exchange development:<br>➢Enhanced liquidity: The integration of the DeFi protocol with exchanges increases liquidity, enabling a smoother trading experience and reducing slippage.<br>➢Diversified financial products: Users have access to a wider range of financial instruments, including lending and borrowing, liquidity mining, and more, in a familiar trading interface.<br>➢Improved security measures: The merger allows for advanced security practices that protect assets and reduce the risk of hacking or vulnerability exploitation.<br>➢Interoperability: Users can seamlessly transfer assets between different DeFi platforms and trading networks, fostering a more connected financial ecosystem.<br>➢Reduced fees and faster transactions: Integration with DeFi enhances the overall user experience by reducing transaction fees and speeding up processing times.<br>➢Reduced risk through diversification: Users can diversify their portfolios by easily transferring assets between DeFi and centralized exchanges to spread risk across platforms.<br>➢Decentralized identity solutions: Combining DeFi with an exchange allows for the integration of decentralized identity solutions, giving users greater control over their personal 
information.<br>➢Innovative financial products: DeFi features such as derivatives, synthetic assets, and more can be seamlessly integrated into exchanges, providing users with more sophisticated investment options.<br>➢Regulatory compliance: The combination of DeFi and exchanges can develop solutions that address regulatory issues and ensure compliance with the evolving legal framework.<br>➢Improved user experience: A unified platform provides a seamless experience for users, simplifying the process of getting started and making DeFi more accessible to a wider audience.</p><h2 id="Future-Trends-in-DeFi-and-Cryptocurrency-Exchange-App-Development"><a href="#Future-Trends-in-DeFi-and-Cryptocurrency-Exchange-App-Development" class="headerlink" title="Future Trends in DeFi and Cryptocurrency Exchange App Development"></a>Future Trends in DeFi and Cryptocurrency Exchange App Development</h2><p>Future trends in DeFi and cryptocurrency exchange application development:<br>◾ Interoperability and cross-chain solutions: The future will see a proliferation of interoperable solutions that allow different blockchain networks to communicate and share data. This will enable seamless asset transfers between different DeFi platforms and cryptocurrency exchanges.<br>◾ Layer 2 scaling solutions: As blockchain networks continue to face scalability challenges, Layer 2 solutions such as Optimistic Rollups and zk-Rollups will be emphasized. They enhance the user experience by providing faster transaction processing times and reduced overhead.<br>◾ Advanced security measures: As the complexity of the DeFi protocol continues to grow, security will remain a primary concern. 
More sophisticated security measures such as formal verification and advanced encryption will be implemented to protect assets.<br>◾ DeFi derivatives and synthetic assets: The development of more advanced financial products, such as derivatives and synthetic assets, will enable users to hedge risk and create more diversified investment strategies in the DeFi space.<br>◾ Regulatory compliance solutions: As DeFi continues to evolve, regulators will pay more attention. Regulatory compliance solutions such as identity verification and transaction monitoring will emerge to balance innovation with compliance.<br>◾ NFT integration: Non-fungible tokens (NFTs) will find more and more use cases in the DeFi ecosystem, from loan collateral to fractional ownership of high-value assets.<br>◾ User-centered interfaces: User experience will be the focus of developers. Intuitive, user-friendly interfaces will be prioritized to appeal to a wider audience and make DeFi and cryptocurrency trading platforms more accessible.<br>◾ Decentralized identity solutions: Secure and private identity management on the blockchain will become increasingly important, allowing users to maintain control of their personal information while participating in various financial activities.<br>◾ Artificial intelligence and machine learning integration: These technologies will be used for data analytics, fraud detection and risk assessment within DeFi and cryptocurrency trading platforms.<br>◾ Green and sustainable finance: With growing environmental concerns, the development of eco-friendly blockchain solutions and sustainable financial options will become a prominent trend.</p><h2 id="Conclusion"><a href="#Conclusion" class="headerlink" title="Conclusion:"></a>Conclusion:</h2><p>The rise of DeFi and its synergy with the development of cryptocurrency trading applications represents a watershed in financial development. 
Together, they have blazed a trail towards a more inclusive, efficient and accessible financial ecosystem.</p><p>Through this exploration, we have witnessed how the DeFi protocol and cryptocurrency exchanges are not only parallel tracks, but also interconnected forces that can amplify each other’s potential. The pools of liquidity, smart contract integrations and innovative financial instruments generated by this alliance have the potential to reshape the way we trade, invest and participate in the global economy.</p><p>However, it is important to recognize that this convergence is not without challenges. Regulatory considerations, security concerns and scalability issues remain significant hurdles that must be addressed for this partnership to be sustainable. The industry must continue to work with regulators to ensure compliance while advocating for an environment that fosters innovation.</p>]]></content>
    
    
    <summary type="html">The proliferation of cryptocurrency trading platforms has played a key role in shaping the broader cryptocurrency landscape</summary>
    
    
    
    <category term="Cryptocurrency" scheme="https://www.nablepart.com/categories/Cryptocurrency/"/>
    
    
    <category term="cryptocurrency" scheme="https://www.nablepart.com/tags/cryptocurrency/"/>
    
    <category term="Defi" scheme="https://www.nablepart.com/tags/Defi/"/>
    
  </entry>
  
  <entry>
    <title>Optimizing Productivity: AIGC Tools I’ve Been Using Recently</title>
    <link href="https://www.nablepart.com/f5d58b0539a0/"/>
    <id>https://www.nablepart.com/f5d58b0539a0/</id>
    <published>2023-07-25T11:50:26.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
    <content type="html"><![CDATA[<p>👨‍🦽 Optimizing productivity: AIGC tools I’ve been using recently</p><p><img src="https://cdn-images-1.medium.com/max/7170/1*NYx7ELf_T25H4sTbKq3zCQ.jpeg" alt="image"></p><p>As a front-end developer riding this AI wave, besides playing with the various ChatGPT and MidJourney tools, I have also upgraded several tools I have used for years into AI-powered ones.</p><p>Here are the AI tools that left the deepest impression on me:</p><h2 id="正则表达式处理"><a href="#正则表达式处理" class="headerlink" title="正则表达式处理"></a>Regular expression processing</h2><p>When writing code, I find the most troublesome part is writing regular expressions. It’s not that writing a regex is hard, but it requires a different way of thinking, and every regex interrupts the whole flow of writing code.</p><p>For complex regexes, the cost of understanding and modifying them is even higher. Although some regex visualization tools can aid understanding, the overall maintenance cost of a regex remains high.</p><p>Currently I am using AI tools to improve the whole process of writing regexes. For simple ones, I use ChatGPT to generate the regex; I just tell it what the regex needs to do.</p><p>For complex regexes, I rely on some intelligent generation tools to help me.</p><p><img src="https://cdn-images-1.medium.com/max/3184/1*xqs-McWArMTiCXKJNddeFg.png" alt="regex demo"></p><p>I have seen some smarter regex tools: <a href="https://regex.ai/">https://regex.ai/</a>. Enter a piece of text, tell the AI what needs to be extracted, and the AI automatically generates a regular expression.</p><p>There is a problem here, though: regexes generated by such tools tend to overfit.</p><p>For me, using ChatGPT to generate regexes is good enough, and I can’t help marveling at this conversational way of working.</p><h2 id="智能终端工具"><a href="#智能终端工具" class="headerlink" title="智能终端工具"></a>Smart terminal tool</h2><p><a href="http://app.warp.dev/referral/8G4GXN"><strong>Your Warp invite - your new favorite terminal</strong></a></p><p><img src="https://cdn-images-1.medium.com/max/2000/1*F3JAAPM1zhuSPTY3IdEJtA.png" alt="Warp AI"></p><p>Recently I replaced iTerm, which I had used for years, with the smart terminal Warp. Use it for ten minutes and you will easily see why I firmly abandoned iTerm.</p><p>Warp is built on Rust, with a lightweight interface and very fast response times. It needs almost no configuration and generates commands conversationally.</p><p>In addition, the built-in editor is very efficient for editing commands.</p><h2 id="代码辅助与代码审查工具"><a href="#代码辅助与代码审查工具" class="headerlink" title="代码辅助与代码审查工具"></a>Code assistance and code review tools</h2><p><a href="https://github.com/sturdy-dev/codeball-action">https://github.com/sturdy-dev/codeball-action</a></p><p>Configuring the corresponding smart CR tool on GitHub lets you run basic code review and rewriting on submitted Pull Request code. It avoids some basic code problems and modifies the code directly by submitting a PR.</p><p>If you don’t know how to write a commit message, you can let AI do this simple job and generate a clear commit note from the existing code.<br><a href="https://github.com/Nutlope/aicommits"><strong>GitHub - Nutlope&#x2F;aicommits: A CLI that writes your git commit messages for you with AI</strong></a></p><p><img src="https://cdn-images-1.medium.com/max/4380/1*u7m1-9HLLEVMmekBHVgXqg.png" alt="image"></p><p>There is also the recently popular editor Cursor, which has integrated ChatGPT 4. It is not yet enough to replace VS Code, but it is already amazing for reading code and asking for code optimization suggestions.</p><h2 id="写作和阅读"><a href="#写作和阅读" class="headerlink" title="写作和阅读"></a>Writing and reading</h2><p>Since I started using Notion AI, the way I write and read has completely changed.</p><p>Notion AI is a feature of Notion that can intelligently generate summaries and extract key information from the text a user enters. It helps users grasp the core content of an article or document faster and improves reading efficiency. In addition, Notion AI can help users generate document outlines, making it easier to organize an article’s structure.</p><p>AI may not necessarily change how people learn, but it can change how they read and, to some extent, lower the barrier to reading English content.</p><ul><li>Use AI to summarize articles</li><li>Translate articles and books into concise language</li><li>Summarize and learn by asking questions</li><li>Have AI create questions to test your understanding of an article.</li></ul><p>Overall, using AI tools 🧠 can improve productivity and let us focus more on core work. Rather than trying to understand and observe AI’s great transformation from afar, it is better to use it yourself and experience this direct change in productivity first-hand.</p>]]></content>
    
    
    <summary type="html">Optimizing productivity: AIGC tools I have been using recently, including AI-generated regular expressions, smart terminal tools, code assistance and review tools, Notion AI, and more. Learn how these tools can improve productivity and reading.</summary>
    
    
    
    
    <category term="优化生产力" scheme="https://www.nablepart.com/tags/%E4%BC%98%E5%8C%96%E7%94%9F%E4%BA%A7%E5%8A%9B/"/>
    
    <category term="AIGC工具" scheme="https://www.nablepart.com/tags/AIGC%E5%B7%A5%E5%85%B7/"/>
    
    <category term="AI生成正则表达式" scheme="https://www.nablepart.com/tags/AI%E7%94%9F%E6%88%90%E6%AD%A3%E5%88%99%E8%A1%A8%E8%BE%BE%E5%BC%8F/"/>
    
    <category term="智能终端工具" scheme="https://www.nablepart.com/tags/%E6%99%BA%E8%83%BD%E7%BB%88%E7%AB%AF%E5%B7%A5%E5%85%B7/"/>
    
    <category term="代码辅助" scheme="https://www.nablepart.com/tags/%E4%BB%A3%E7%A0%81%E8%BE%85%E5%8A%A9/"/>
    
    <category term="代码审查工具" scheme="https://www.nablepart.com/tags/%E4%BB%A3%E7%A0%81%E5%AE%A1%E6%9F%A5%E5%B7%A5%E5%85%B7/"/>
    
    <category term="Notion AI" scheme="https://www.nablepart.com/tags/Notion-AI/"/>
    
    <category term="工作效率" scheme="https://www.nablepart.com/tags/%E5%B7%A5%E4%BD%9C%E6%95%88%E7%8E%87/"/>
    
    <category term="阅读效果" scheme="https://www.nablepart.com/tags/%E9%98%85%E8%AF%BB%E6%95%88%E6%9E%9C/"/>
    
  </entry>
  
  <entry>
    <title>L’AIGC’s Five Demands on Governments</title>
    <link href="https://www.nablepart.com/1e740989b4bd/"/>
    <id>https://www.nablepart.com/1e740989b4bd/</id>
    <published>2023-07-24T11:50:26.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
    <content type="html"><![CDATA[<h2 id="L’-AIGC对政府的五项要求"><a href="#L’-AIGC对政府的五项要求" class="headerlink" title="L’ AIGC对政府的五项要求"></a>L’AIGC’s five demands on governments</h2><h3 id="IAGC成员达成的五个共识"><a href="#IAGC成员达成的五个共识" class="headerlink" title="IAGC成员达成的五个共识"></a>Five consensus points reached by IAGC members</h3><h3 id="5个共同点"><a href="#5个共同点" class="headerlink" title="5个共同点"></a>Five common points</h3><ul><li><p>All countries agree to achieve carbon neutrality by 2050.</p></li><li><p>Every country takes immediate measures to rapidly reduce greenhouse gas emissions.</p></li><li><p>Governments divest from the fossil energy industry.</p></li><li><p>End subsidies for fossil fuel production and use.</p></li><li><p>Tax fossil fuels to reflect their negative impact on society and the environment.</p></li></ul><h2 id="IAGC成员达成的五个共识-1"><a href="#IAGC成员达成的五个共识-1" class="headerlink" title="IAGC成员达成的五个共识"></a>Five consensus points reached by IAGC members</h2><ul><li><p>All countries agree to reach carbon neutrality</p></li><li><p>Take immediate measures to reduce domestic emissions in the short term</p></li><li><p>Governments divest from the fossil energy industry</p></li><li><p>End all subsidies for fossil fuel production and use</p></li><li><p>Tax fossil fuels to reflect the real burden they place on society and the environment</p></li></ul>]]></content>
    
    
    <summary type="html">L&#39;AIGC puts forward five demands on governments, including reaching a global carbon-neutrality agreement, reducing greenhouse gas emissions, divesting from the fossil energy industry, ending subsidies, and taxing fossil fuels. These measures aim at environmental protection and sustainable development.</summary>
    
    
    
    
    <category term="L&#39;AIGC" scheme="https://www.nablepart.com/tags/L-AIGC/"/>
    
    <category term="全球碳中和" scheme="https://www.nablepart.com/tags/%E5%85%A8%E7%90%83%E7%A2%B3%E4%B8%AD%E5%92%8C/"/>
    
    <category term="温室气体排放" scheme="https://www.nablepart.com/tags/%E6%B8%A9%E5%AE%A4%E6%B0%94%E4%BD%93%E6%8E%92%E6%94%BE/"/>
    
    <category term="撤资化石能源" scheme="https://www.nablepart.com/tags/%E6%92%A4%E8%B5%84%E5%8C%96%E7%9F%B3%E8%83%BD%E6%BA%90/"/>
    
    <category term="终止补贴" scheme="https://www.nablepart.com/tags/%E7%BB%88%E6%AD%A2%E8%A1%A5%E8%B4%B4/"/>
    
    <category term="化石燃料征税" scheme="https://www.nablepart.com/tags/%E5%8C%96%E7%9F%B3%E7%87%83%E6%96%99%E5%BE%81%E7%A8%8E/"/>
    
    <category term="环境保护" scheme="https://www.nablepart.com/tags/%E7%8E%AF%E5%A2%83%E4%BF%9D%E6%8A%A4/"/>
    
    <category term="可持续发展" scheme="https://www.nablepart.com/tags/%E5%8F%AF%E6%8C%81%E7%BB%AD%E5%8F%91%E5%B1%95/"/>
    
  </entry>
  
  <entry>
    <title>CARV Research: Will Generative AI Become the Dominant Force in the Gaming Industry?</title>
    <link href="https://www.nablepart.com/6d967575e570/"/>
    <id>https://www.nablepart.com/6d967575e570/</id>
    <published>2023-07-23T11:50:26.000Z</published>
    <updated>2025-08-25T09:00:39.798Z</updated>
    
    <content type="html"><![CDATA[<h2 id="CARV-Research：生成型人工智能是否将成为游戏行业的主导力量？"><a href="#CARV-Research：生成型人工智能是否将成为游戏行业的主导力量？" class="headerlink" title="CARV Research：生成型人工智能是否将成为游戏行业的主导力量？"></a>CARV Research: Will Generative AI Become the Dominant Force in the Gaming Industry?</h2><p><img src="https://cdn-images-1.medium.com/max/7200/1*BUAPv9nb1roOngdej0OAtA.png"></p><p>Given the current state of affairs, the title above may sound like an overstatement. However, following our recent participation in CARV's panel discussion on AIGC and gaming, we intend to dig deeper into this topic and examine its current progress.</p><p>This article gathers a range of industry insights. We invite you to read on and take away something new!</p><h2 id="游戏行业中的长期AIGC应用"><a href="#游戏行业中的长期AIGC应用" class="headerlink" title="游戏行业中的长期AIGC应用"></a>Long-Term AIGC Applications in the Gaming Industry</h2><p>Even before the astonishing explosion of recent months, generative AI in the gaming market had already grown substantially in recent years, reaching a global market size of $92.2 billion in 2022. Market.us estimates that by 2032 the market will accelerate at a compound annual growth rate (CAGR) of 23.3% and generate $710.5 billion in incremental revenue.</p><p>Adoption of AI-generated assets is expected to keep driving market growth, as developers increasingly turn to them to improve the overall quality and experience of their games.</p><p>We can see major players such as Ludo and Minecraft adopting generative AI technology, aiming to offer players fresh and unique content. More recent examples include Unity Software, which planned to launch a generative AI marketplace in March 2023, and Latitude.io's March 2023 launch of its new Wyvern model 2.0, developed with AI21 and built on a brand-new, smarter foundation model.</p><p>Generative AI is clearly a powerful tool, far more capable than any equivalent that came before it: it can create new game assets such as 3D models and sprites, and generate unique new assets in a specific style or genre by learning from existing game assets.</p><p>AIGC tools are increasingly used for character modeling, animation, and terrain creation, with developers generating realistic landscapes, characters, and objects. AI tools can also enable developers to build more complex and dynamic game mechanics and to optimize game performance, and they can be applied to in-game testing for more accurate and efficient test coverage.</p><p>So far, AIGC has entered production workflows gradually, as game developers slowly integrate it into the content creation and coding pipelines that were traditionally outsourced to external labor.</p><p>In the long run, the consensus is that AI will serve as a copilot for human creators, making existing workflows more powerful and more accessible, until it triggers a fundamental and inevitable industry transformation.</p><h2 id="AI驱动的UGC：每个人都可以制作游戏"><a href="#AI驱动的UGC：每个人都可以制作游戏" class="headerlink" title="AI驱动的UGC：每个人都可以制作游戏"></a>AI-Powered UGC: Everyone Can Make Games</h2><p>While game companies develop in-house titles, the arrival of AI-powered generative tools has opened a door of opportunity for user-generated content (UGC) in the gaming industry.</p><p>One important new application is collaborative creation tools that let creators generate assets from text, voice, or image prompts. For example, ControlNet for Stable Diffusion and AI Dungeon allow creators to co-author lore, world-building, storylines, quests, and even complete branching visual-novel games. GPT-4 has already been used to auto-generate simple games such as Snake.</p><p>Creation engines built from the ground up on generative AI may enable new creative paradigms and user experiences, with custom rendering capabilities or programming languages designed specifically for generating user-created assets.</p><p>Generative AI will also give community creators more flexibility in defining in-game content-generation rules. For example, Role and Riftweaver are experimenting with letting tabletop game masters harness the power of generative AI to drop players into custom environments and battle new monsters with custom stats, lore, and abilities.</p><p>Building on this, Web3 games and Web3 users have even more room to unleash their imagination.</p><h2 id="AI驱动的Web3游戏：使公司和社区受益"><a href="#AI驱动的Web3游戏：使公司和社区受益" class="headerlink" title="AI驱动的Web3游戏：使公司和社区受益"></a>AI-Powered Web3 Games: Benefiting Companies and Communities</h2><p>Against the backdrop of the AI transformation, Web3 games closely resemble mainstream games in that they benefit from integrating AI tools: streamlined asset generation, optimized workflows, reduced costs, and the other conveniences mentioned above.</p><p>At the same time, decentralized technology is energizing AI-driven gaming. Just as we have seen creators begin generating video without traditional animation software and rendering pipelines, new decentralized assets and data sovereignty may well reshape the gaming landscape we know today.</p><p>On one hand, Web3 can further optimize revenue sharing between creators and platforms. On Roblox, creators take home less than 30% of the revenue they generate (around 10% for Fortnite Creative users). A Web3 economy pairs creation tools with generative AI; these tools may be cheaper, easier to maintain and upgrade, and allow new platforms to pay creators better in a more decentralized way.</p><p>On the other hand, Web3 creates unique user experiences and is a more powerful vehicle for launching products, especially when boosted by AI tools. We often see novel cases emerge rapidly, and studying them may be the best way to grasp the zeitgeist of this technological era.</p><h3 id="案例：AI赋能的大IP"><a href="#案例：AI赋能的大IP" class="headerlink" title="案例：AI赋能的大IP"></a>Case: Major IPs Empowered by AI</h3><p>FIFA, yes, that FIFA, has just announced the mobile game <a href="https://fifaworldcupaileague.com/">FIFA World Cup AI League</a>, designed to boost fan engagement and offer players new ways to interact with their favorite football clubs. It will also integrate blockchain technology, allowing players to own in-game assets and participate in a digital economy.</p><h3 id="案例：由AI倡议驱动的新游戏"><a href="#案例：由AI倡议驱动的新游戏" class="headerlink" title="案例：由AI倡议驱动的新游戏"></a>Case: New Games Driven by AI Initiatives</h3><p><a href="https://plailabs.com/">PLAI Labs</a> focuses on building next-generation social platforms that leverage AI and Web3. Their first title is <a href="https://www.champions.io/">Champions Ascension</a>, a platform MMORPG where players can import their existing NFT characters, embark on quests, trade items, battle in the colosseum, build their own unique dungeons, and more. They are also building an AI protocol platform to power everything from UGC to matchmaking to 2D and 3D asset rendering.</p><h3 id="案例：AI驱动的平台"><a href="#案例：AI驱动的平台" class="headerlink" title="案例：AI驱动的平台"></a>Case: AI-Powered Platforms</h3><p><a href="https://launchpad.seedify.fund/?utm_source=Cointelegraph&utm_medium=&utm_campaign=Shockwaves">Seedify</a> has launched a new platform called <a href="https://www.shockwaves.ai/">Shockwaves</a>, which aims to deliver unique gaming experiences by combining NFTs with AI technology, enabling sophisticated gameplay and intelligent decision-making.</p><h3 id="案例：AI驱动的UGC-NFT"><a href="#案例：AI驱动的UGC-NFT" class="headerlink" title="案例：AI驱动的UGC NFT"></a>Case: AI-Powered UGC NFTs</h3><p>Web3 is, at its core, about community. AI-powered UGC NFTs may render "traditional NFTs" obsolete. Powered by the <a href="https://www.ultiverse.io/">Ultiverse</a> SDK, <a href="https://www.metamerge.xyz/">Meta Merge</a> lets users mint unique pet NFTs with generative AI content, including each pet's distinctive behavior, personality, and adaptation to the player's style, enabling a personalized experience.</p><p>The Ultiverse platform will release more AIGC-enabled games in the future, aiming to use AI to craft unique experiences for all players, particularly in shaping characters, developing storylines, and creating maps.</p><p>The rise of generative AI represents a major shift for the gaming industry, democratizing the game-creation process and giving communities a more active role in shaping the games they play. CARV expects generative AI to unlock new levels of creativity, innovation, and player empowerment across both the Web2 and Web3 worlds.</p><p><strong>About CARV</strong></p><p>All your gaming moments in one place. CARV is building a gaming-focused ID infrastructure that lets players showcase achievements, socialize semantically, and access gaming privileges.</p><p><a href="https://bit.ly/3GmBJyl">Website</a> | <a href="https://bit.ly/3WQ5hvl">Docs</a> | <a href="https://twitter.com/carv_official">Twitter</a> | <a href="https://medium.com/@carv">Medium</a> | <a href="https://discord.gg/5btvsrZBkH">Discord</a> | <a href="https://t.me/carv_official_global">Telegram</a> | <a href="https://newsletter.carv.io/?utm_campaign=newsletter&utm_medium=email&utm_source=Revue+newsletter">Newsletter</a> | <a href="https://link3.to/carvofficial">Link3</a></p><p>Ecosystem builders interested in partnering with CARV can learn more about our <a href="https://www.notion.so/carv-guardian/Carv-Guardian-Program-b584d0eefbc54467a92787f785734640">Guardian Program</a>.</p>]]></content>
    
    
    <summary type="html">Drawing on CARV's research, this article explores the potential and impact of generative AI in the gaming industry: its applications in game creation, new opportunities for UGC, and development trends in Web3 gaming.</summary>
    
    
    
    
    <category term="CARV Research" scheme="https://www.nablepart.com/tags/CARV-Research/"/>
    
    <category term="生成型人工智能" scheme="https://www.nablepart.com/tags/%E7%94%9F%E6%88%90%E5%9E%8B%E4%BA%BA%E5%B7%A5%E6%99%BA%E8%83%BD/"/>
    
    <category term="游戏行业" scheme="https://www.nablepart.com/tags/%E6%B8%B8%E6%88%8F%E8%A1%8C%E4%B8%9A/"/>
    
    <category term="游戏创作" scheme="https://www.nablepart.com/tags/%E6%B8%B8%E6%88%8F%E5%88%9B%E4%BD%9C/"/>
    
    <category term="UGC" scheme="https://www.nablepart.com/tags/UGC/"/>
    
    <category term="Web3游戏" scheme="https://www.nablepart.com/tags/Web3%E6%B8%B8%E6%88%8F/"/>
    
  </entry>
  
  <entry>
    <title>BitMart Partners with iPollo and CertiK to Host Campus Web3 Talks at New York University</title>
    <link href="https://www.nablepart.com/ba428d4008cc/"/>
    <id>https://www.nablepart.com/ba428d4008cc/</id>
    <published>2023-07-22T11:50:26.000Z</published>
    <updated>2025-08-25T09:00:39.798Z</updated>
    
    <content type="html"><![CDATA[<h2 id="BitMart与iPollo和CertiK合作在纽约大学举办校园Web3讲座"><a href="#BitMart与iPollo和CertiK合作在纽约大学举办校园Web3讲座" class="headerlink" title="BitMart与iPollo和CertiK合作在纽约大学举办校园Web3讲座"></a><strong>BitMart Partners with iPollo and CertiK to Host Campus Web3 Talks at New York University</strong></h2><h3 id="一场关于Web3、Metaverse、AIGC、AR以及Web3和区块链行业的政策和投资环境的研讨会。"><a href="#一场关于Web3、Metaverse、AIGC、AR以及Web3和区块链行业的政策和投资环境的研讨会。" class="headerlink" title="一场关于Web3、Metaverse、AIGC、AR以及Web3和区块链行业的政策和投资环境的研讨会。"></a>A seminar on Web3, the Metaverse, AIGC, AR, and the policy and investment environment of the Web3 and blockchain industry.</h3><p><img src="https://cdn-images-1.medium.com/max/9002/1*tkVUCvGa715ADZgAwwkzRQ.jpeg"></p><p>Meet BitMart at New York University!</p><p>BitMart is delighted to announce that we will host Campus Web3 Talks at the <strong>NYU Stern School of Business</strong> on May 7, 2023, from 3:00 to 5:00 PM (EDT). The event is sponsored by <a href="https://ipollo.org/">iPollo</a>, <a href="https://www.bitmart.com/">BitMart</a>, and <a href="https://www.certik.com/">CertiK</a>, organized by the <a href="https://nyustern.campusgroups.com/abs/home/">NYU Stern Asian Business Society</a> and the <a href="https://www.nyusterntech.com/">NYU Stern Technology Association</a>, in partnership with the <a href="https://yaleconnect.yale.edu/yea/home/">Yale International Student Entrepreneurs Association</a>.</p><p>The campus talks will consist of two parts: a seminar on Web3, the Metaverse, AIGC, AR, and the policy and investment environment of the Web3 and blockchain industry, followed by a networking session with industry leaders.</p><h2 id="议程"><a href="#议程" class="headerlink" title="议程"></a><strong>Agenda</strong></h2><p>3:00 - 3:30 PM <strong>[iPollo Keynote]</strong></p><p><strong>Topic:</strong> The AI singularity is here, and it is the perfect time to build a Web3 startup.</p><p><strong>Speakers:</strong></p><p><a href="https://ir.nano.cn/corporate-information/management-team">Jack Kong (Kong Jianping)</a>, Chairman / CEO, Nano Labs</p><p><a href="https://ir.nano.cn/corporate-information/management-team">Marvin Kong (Kong Huawei)</a>, Chief Scientist, iPollo</p><p>3:30 - 4:15 PM <strong>[Panel Discussion]</strong></p><p><strong>Panelists:</strong></p><p><a href="https://ir.nano.cn/corporate-information/management-team">Marvin Kong (Kong Huawei)</a>, Chief Scientist, iPollo</p><p><a href="https://www.linkedin.com/in/zhong-shao-545b754/">Zhong Shao</a>: Professor of Computer Science at Yale University, Co-founder of CertiK</p><p><a href="https://www.linkedin.com/in/kechen82/">Keith Chen</a>: Managing Partner, SNZ Capital</p><p><strong>Moderator:</strong></p><p><a href="https://www.linkedin.com/in/cliang14/">Chad Liang</a>: Vice President of Business, BitMart</p><p>4:15 - 4:30 PM <strong>[Q&amp;A with the Speakers]</strong></p><p>4:30 - 5:00 PM <strong>[Pizza Social]</strong></p><p><strong>RSVP for free now: <a href="https://www.eventbrite.com/e/new-york-university-campus-web3-talks-tickets-624471921327">https://www.eventbrite.com/e/624471921327</a></strong></p><p><img src="https://cdn-images-1.medium.com/max/4502/1*-1g_Q_vILC9TqhqUsfv3mw.jpeg"></p><h3 id="关于iPollo"><a href="#关于iPollo" class="headerlink" title="关于iPollo"></a><strong>About iPollo</strong></h3><p><a href="https://ipollo.org/">iPollo</a> is the world's first metaverse infrastructure service provider with real-time rendering and AIGC capabilities, committed to enabling open social interaction and immersive experiences in 3D metaverse worlds and building a new Web 3.0 lifestyle.</p><p><strong>Follow iPollo for more updates:</strong></p><p><a href="https://twitter.com/iPolloverse">Twitter</a> | <a href="https://t.me/iPolloChain">Telegram</a> | <a href="https://discord.com/invite/8VTVnApfqG">Discord</a> | <a href="https://www.youtube.com/@iPolloverse">YouTube</a> | <a href="https://medium.com/@ipollo">Medium</a></p><h3 id="关于BitMart"><a href="#关于BitMart" class="headerlink" title="关于BitMart"></a><strong>About BitMart</strong></h3><p><a href="https://www.bitmart.com/">BitMart</a> is a premier global digital asset trading platform. With millions of users worldwide and ranked among the top crypto exchanges on CoinGecko, BitMart currently offers more than 1,000 trading pairs with some of the lowest trading fees in the industry. Constantly evolving and growing, BitMart is interested in crypto's potential to drive innovation and promote financial inclusion. To learn more about BitMart, visit our <a href="https://www.bitmart.com/">Website</a>, follow our <a href="https://twitter.com/BitMartExchange">Twitter</a>, or join our <a href="https://t.me/BitMartExchange">Telegram</a> for updates, news, and promotions. Remember to download the <a href="https://www.bitmart.com/app/en">BitMart App</a> to trade your favorite crypto anytime, anywhere.</p><p><strong>Follow BitMart for more updates:</strong></p><p><a href="https://twitter.com/BitMartExchange">Twitter</a> | <a href="https://twitter.com/BitMartResearch">BitMart Research</a> | <a href="https://www.facebook.com/bitmartexchange/">Facebook</a> | <a href="https://t.me/BitMartExchange">Telegram</a> | <a href="https://www.tiktok.com/@bitmart.exchange">TikTok</a> | <a href="https://instagram.com/bitmart_exchange?utm_medium=copy_link">Instagram</a> | <a href="https://discord.com/invite/RTT4vweX2X">Discord</a></p>]]></content>
    
    
    <summary type="html">BitMart partners with iPollo and CertiK to host Campus Web3 Talks at New York University. Learn about the latest policy and investment environment around Web3, the Metaverse, AIGC, AR, and blockchain, connect with industry leaders, and explore the future of Web3.</summary>
    
    
    
    
    <category term="BitMart" scheme="https://www.nablepart.com/tags/BitMart/"/>
    
    <category term="iPollo" scheme="https://www.nablepart.com/tags/iPollo/"/>
    
    <category term="CertiK" scheme="https://www.nablepart.com/tags/CertiK/"/>
    
    <category term="纽约大学" scheme="https://www.nablepart.com/tags/%E7%BA%BD%E7%BA%A6%E5%A4%A7%E5%AD%A6/"/>
    
    <category term="校园讲座" scheme="https://www.nablepart.com/tags/%E6%A0%A1%E5%9B%AD%E8%AE%B2%E5%BA%A7/"/>
    
    <category term="Web3" scheme="https://www.nablepart.com/tags/Web3/"/>
    
    <category term="Metaverse" scheme="https://www.nablepart.com/tags/Metaverse/"/>
    
    <category term="AIGC" scheme="https://www.nablepart.com/tags/AIGC/"/>
    
    <category term="AR" scheme="https://www.nablepart.com/tags/AR/"/>
    
    <category term="区块链" scheme="https://www.nablepart.com/tags/%E5%8C%BA%E5%9D%97%E9%93%BE/"/>
    
    <category term="投资环境" scheme="https://www.nablepart.com/tags/%E6%8A%95%E8%B5%84%E7%8E%AF%E5%A2%83/"/>
    
    <category term="政策" scheme="https://www.nablepart.com/tags/%E6%94%BF%E7%AD%96/"/>
    
    <category term="业界领袖" scheme="https://www.nablepart.com/tags/%E4%B8%9A%E7%95%8C%E9%A2%86%E8%A2%96/"/>
    
    <category term="Web3发展" scheme="https://www.nablepart.com/tags/Web3%E5%8F%91%E5%B1%95/"/>
    
  </entry>
  
  <entry>
    <title>Automate Your Life with These 3 ChatGPT Extensions</title>
    <link href="https://www.nablepart.com/4902559cb2db/"/>
    <id>https://www.nablepart.com/4902559cb2db/</id>
    <published>2023-07-21T11:50:26.000Z</published>
    <updated>2025-08-25T09:00:39.798Z</updated>
    
    <content type="html"><![CDATA[<h2 id="通过这3个ChatGPT扩展，让生活自动化"><a href="#通过这3个ChatGPT扩展，让生活自动化" class="headerlink" title="通过这3个ChatGPT扩展，让生活自动化"></a><strong>Automate Your Life with These 3 ChatGPT Extensions</strong></h2><p>ChatGPT can be used to automate simple tasks and workflows in everyday life. One way to achieve this is to train the model on specific business-related tasks or data, such as FAQs or typical customer interactions. Once trained, the model can be embedded in a chatbot or another application to carry out those tasks automatically.</p><h3 id="1-使用“God-In-A-Box”实现WhatsApp自动化"><a href="#1-使用“God-In-A-Box”实现WhatsApp自动化" class="headerlink" title="1. 使用“God In A Box”实现WhatsApp自动化"></a><strong>1. Automate WhatsApp with “God In A Box”</strong></h3><p><img src="https://cdn-images-1.medium.com/max/2000/1*VWcNS5ls3XlaqBJFhCrofw.png" alt="“God In A Box”, a personal assistant powered by ChatGPT"></p><p>A personal assistant powered by ChatGPT can automate all kinds of tasks, such as scheduling appointments, sending emails, and setting reminders.</p><p>Imagine you are in the middle of a WhatsApp conversation and suddenly find yourself with nothing to say, or no fresh conversation opener.</p><p>It can happen to anyone, but there is no need to worry.</p><p>Now we can turn to the extension <a href="https://godinabox.co/">“God In A Box”</a>. With the help of this add-on, you can start WhatsApp chats through ChatGPT. All you need to do is sign up to use ChatGPT on WhatsApp.</p><h3 id="2-使用Code-GPT实现VSCode自动化"><a href="#2-使用Code-GPT实现VSCode自动化" class="headerlink" title="2. 使用Code GPT实现VSCode自动化"></a><strong>2. Automate VSCode with <a href="https://github.com/timkmecl/codegpt">Code GPT</a></strong></h3><p><img src="https://cdn-images-1.medium.com/max/2000/1*emyUpfcIPFrl7agcWWa8vw.png" alt="Code GPT, a personal assistant powered by ChatGPT"></p><p>Use the “CodeGPT” extension to boost your programming productivity!</p><p>Looking for strategies to code more efficiently? Download the VSCode extension <a href="https://github.com/timkmecl/codegpt">“CodeGPT”</a> now!</p><p>The plugin offers several features that will raise your productivity. With CodeGPT you can generate code, explain code, ask questions, refactor code, document code, and find problems in your code.</p><p>Just type a comment and press cmd-shift-i to generate code. The code you requested appears in a newly opened window. Try it today and see the difference <a href="https://github.com/timkmecl/codegpt">CodeGPT</a> makes!</p><h3 id="3-通过Merlin在Google-Chrome上使用ChatGPT"><a href="#3-通过Merlin在Google-Chrome上使用ChatGPT" class="headerlink" title="3. 通过Merlin在Google Chrome上使用ChatGPT"></a><strong>3. Use ChatGPT in Google Chrome with <a href="https://merlin.foyer.work/onboarding/">Merlin</a></strong></h3><p><img src="https://cdn-images-1.medium.com/max/2000/1*CqUNnKYdwZxOQCBU7HBOAQ.gif" alt="Merlin, a personal assistant powered by ChatGPT"></p><p>If you want to unlock ChatGPT's full potential in Google Chrome, download “Merlin”. With Merlin, you can use ChatGPT's capabilities on sites such as Gmail and Google Sheets, and anywhere else you browse and write online.</p><p><a href="https://merlin.foyer.work/onboarding/">Merlin</a> lets you perform all kinds of tasks, just like ChatGPT. For example:</p><ul><li>Summarize the content of any website: press cmd + M and ask Merlin to create a summary of the text you have selected.</li></ul><p>Note that although ChatGPT can be used to automate certain tasks, it is not a standalone tool; it needs to be integrated with other software or platforms to automate your life. In addition, using a chatbot or AI personal assistant for appointments and other tasks may be no substitute for human interaction, as it can lack the human touch and struggle with unexpected events or subtle nuances.</p><p><em>For more content, visit <a href="https://plainenglish.io/"><strong>PlainEnglish.io</strong></a>.</em></p><p><em>Sign up for our <a href="http://newsletter.plainenglish.io/"><strong>free weekly newsletter</strong></a>. Follow us on <a href="https://twitter.com/inPlainEngHQ"><strong>Twitter</strong></a>, <a href="https://www.linkedin.com/company/inplainenglish/"><strong>LinkedIn</strong></a>, <a href="https://www.youtube.com/channel/UCtipWUghju290NWcn8jhyAw"><strong>YouTube</strong></a>, and <a href="https://discord.gg/GtDtUAvyhW"><strong>Discord</strong></a>.</em></p><p><strong>Interested in scaling your software startup?</strong> Check out <a href="https://circuit.ooo/?utm=publication-post-cta"><strong>Circuit</strong></a>.</p>]]></content>
    
    
    <summary type="html">With ChatGPT extensions, you can automate all kinds of everyday tasks. This article introduces three of them: automating WhatsApp with &quot;God In A Box&quot;, automating VSCode with &quot;CodeGPT&quot;, and using ChatGPT in Google Chrome with &quot;Merlin&quot;. Learn how to boost your productivity and unlock their potential!</summary>
    
    
    
    
    <category term="ChatGPT" scheme="https://www.nablepart.com/tags/ChatGPT/"/>
    
    <category term="AI" scheme="https://www.nablepart.com/tags/AI/"/>
    
    <category term="自动化" scheme="https://www.nablepart.com/tags/%E8%87%AA%E5%8A%A8%E5%8C%96/"/>
    
    <category term="God In A Box" scheme="https://www.nablepart.com/tags/God-In-A-Box/"/>
    
    <category term="CodeGPT" scheme="https://www.nablepart.com/tags/CodeGPT/"/>
    
    <category term="Merlin" scheme="https://www.nablepart.com/tags/Merlin/"/>
    
    <category term="WhatsApp" scheme="https://www.nablepart.com/tags/WhatsApp/"/>
    
    <category term="VSCode" scheme="https://www.nablepart.com/tags/VSCode/"/>
    
    <category term="Google Chrome" scheme="https://www.nablepart.com/tags/Google-Chrome/"/>
    
    <category term="生产力" scheme="https://www.nablepart.com/tags/%E7%94%9F%E4%BA%A7%E5%8A%9B/"/>
    
    <category term="扩展程序" scheme="https://www.nablepart.com/tags/%E6%89%A9%E5%B1%95%E7%A8%8B%E5%BA%8F/"/>
    
    <category term="自动化任务" scheme="https://www.nablepart.com/tags/%E8%87%AA%E5%8A%A8%E5%8C%96%E4%BB%BB%E5%8A%A1/"/>
    
  </entry>
  
  <entry>
    <title>Walking at the forefront of technology, Baidu will make an appearance at GOTC 2023.</title>
    <link href="https://www.nablepart.com/0664448c5833/"/>
    <id>https://www.nablepart.com/0664448c5833/</id>
    <published>2023-07-20T12:00:00.000Z</published>
    <updated>2025-08-25T09:00:39.806Z</updated>
    
    <content type="html"><![CDATA[<p>The Global Open-source Technology Conference <strong>GOTC 2023</strong>, co-sponsored by the OpenAtom Open Source Foundation, the Linux Foundation Asia-Pacific, Shanghai Pudong Software Park, and Open Source China, <strong>will be held on May 27-28 at the Zhangjiang Science Hall in Shanghai.</strong></p><p>This grand open source technology event for developers around the world will set the direction for open source in 2023. The conference will feature industry exhibitions, keynote speeches, thematic forums, and open source bazaars. Attendees will discuss hot technology topics such as the metaverse, 3D and gaming, eBPF, Web3.0, and blockchain, along with topics such as open source communities, AIGC, automotive software, AI programming, open source education and training, and cloud native, exploring the future of open source and how to advance its development.</p><p>By the end of 2022, Baidu had open-sourced more than 1,000 projects with more than 20,000 community contributors, covering fields such as machine learning, autonomous driving, blockchain, data storage, edge computing, front-end, security, and more. In particular, open source projects such as PaddlePaddle (Flying Paddle), Apollo, and XuperChain have become leading technology platforms in the industry, attracting more and more developers to participate in them. The successful practice of these open source projects not only improves Baidu's own technology but also makes positive contributions to the global open source community.</p><p>Baidu is also a member of the Linux Foundation, the Apache Software Foundation, the Cloud Native Computing Foundation (CNCF), and the OpenAtom Open Source Foundation. 
Ten Baidu projects, including ECharts, Doris, bRPC, Baetyl, and BFE, have been donated to open source foundations for incubation; among them, Apache ECharts, Apache Doris, and Apache bRPC have graduated to become Apache Top-Level Projects.</p><p>As a leading technology company in China, Baidu will <strong>share a number of exciting topics</strong> during the two-day GOTC conference, showcasing its latest achievements and research progress in cutting-edge fields such as AI, autonomous driving, blockchain, cloud native, and graph databases to industry leaders and technology enthusiasts.</p><p><strong>Main Forum</strong></p><p><strong>Keynote Speech:</strong> Big Models Open a New Era of AI</p><p><strong>Speaker:</strong> Hou Zhenyu | Vice President, Baidu Group</p><p><strong>Topic Forum:</strong> eBPF</p><p><strong>Topic:</strong> eBPF Technology Practice in the Cloud Native Domain</p><p><strong>Speaker:</strong> Weihua Di | Cloud Native Architect, Baidu</p><p><strong>Topic Overview:</strong> (1) an introduction to BPF technology; (2) applications of BPF technology in the cloud native domain; (3) BPF practice in Baidu's cloud native stack.</p><p><strong>Topic Forum:</strong> Web3 Metaverse Worlds</p><p><strong>Topic:</strong> Distributed Instant Messaging Infrastructure Practice and Application Based on Blockchain</p><p><strong>Speaker:</strong> Jingbo | Deputy General Manager and Head of Technology, Baidu Blockchain</p><p><strong>Topic Overview:</strong> A blockchain-based Web3 community is jointly owned by its creators and co-builders. The community is no longer controlled by a single platform, and no platform can collect dividends from a well-run community that are disproportionate to its contribution. At the same time, community co-builders can independently choose their own service providers. If the server hosting the community stops providing service one day, users can vote to automatically migrate the entire community's data and code to another service provider, so that the community, which embodies the co-builders' efforts, continues to exist and run.</p><p><strong>Featured Forum:</strong> AI is Everywhere</p><p><strong>Speaker:</strong> Zhang Jun | Baidu PaddlePaddle Framework Product Leader, OpenAtom Open Source Foundation TOC Member</p><p><strong>Topic:</strong> Deep Learning Platform + Big Models: Strengthening the Foundation of Industrial Intelligence</p><p><strong>Topic Overview:</strong> This talk introduces the progress of Baidu's deep learning platform + big model work in core technology R&amp;D, product innovation, and ecosystem building, in light of the latest trends in generative AI and Baidu's own practice. It also shares thinking on PaddlePaddle's industrial-grade open source deep learning platform and on building an industry-education integration ecosystem under this new trend.</p><p><strong>Thematic Forum: Infrastructure and Software Architecture</strong></p><p><strong>Topic:</strong> Apache HugeGraph Distributed Storage and Computing Open Source Evolution Road</p><p><strong>Speaker:</strong> Shiming Zhang | Graph Database Leader, CVTE Research Institute</p><p><strong>Topic Overview:</strong> One year after HugeGraph joined the Apache community, we released the official 1.0 version. This year we continue to evolve toward the new 2.0 version and to merge the internal version with the open source version. In this talk, we share the current design and implementation of the distributed storage and computation components, discuss how to participate more effectively in the open source community, and close with our future plans.</p><p><strong>Flash Talks</strong></p><p><strong>Topic 1:</strong> Baidu Smart Edge and MQTT Message Middleware Open Source Project Overview</p><p><strong>Speaker:</strong> Huang Cheng | Chief Architect, Internet of Things, Baidu Intelligent Cloud</p><p><strong>Topic 2:</strong> Smart Road OS - an Open Source, Open Vehicle-Road-Cloud Integrated Intelligent Connected Roadside Operating System</p><p><strong>Speaker:</strong> Jiefeng Sha | Senior Engineer, AIR, Baidu</p><p>Registration for <strong>GOTC 2023</strong> is now open, and we invite open source enthusiasts from all over the world to join us!</p>]]></content>
    
    
    <summary type="html">The Global Open-source Technology Conference (GOTC) 2023, jointly initiated by the OpenAtom Foundation, the Linux Foundation Asia-Pacific, Shanghai Pudong Software Park and Open Source China, will be held at the Zhangjiang Science Hall in Shanghai on May 27-28.</summary>
    
    
    
    <category term="Blockchain" scheme="https://www.nablepart.com/categories/Blockchain/"/>
    
    
    <category term="AI" scheme="https://www.nablepart.com/tags/AI/"/>
    
    <category term="Open-source" scheme="https://www.nablepart.com/tags/Open-source/"/>
    
    <category term="GOTC" scheme="https://www.nablepart.com/tags/GOTC/"/>
    
    <category term="Baidu" scheme="https://www.nablepart.com/tags/Baidu/"/>
    
  </entry>
  
  <entry>
    <title>Amazon AIGC Products: The Easiest Way to Build Generative AI Applications</title>
    <link href="https://www.nablepart.com/6d2a8abe48fa/"/>
    <id>https://www.nablepart.com/6d2a8abe48fa/</id>
    <published>2023-07-20T11:50:26.000Z</published>
    <updated>2025-08-25T09:00:39.798Z</updated>
    
    <content type="html"><![CDATA[<h1 id="Amazon-AIGC-Products-The-Easiest-Way-to-Build-Generative-AI-Applications"><a href="#Amazon-AIGC-Products-The-Easiest-Way-to-Build-Generative-AI-Applications" class="headerlink" title="Amazon AIGC Products: The Easiest Way to Build Generative AI Applications"></a>Amazon AIGC Products: The Easiest Way to Build Generative AI Applications</h1><p><img src="https://cdn-images-1.medium.com/max/7744/0*Gwdgc_fcR7ZgGNnb" alt="Photo by [Colton Sturgeon](https://unsplash.com/@coltonsturgeon?utm_source=medium&amp;utm_medium=referral) on [Unsplash](https://unsplash.com?utm_source=medium&amp;utm_medium=referral)"></p><h2 id="介绍"><a href="#介绍" class="headerlink" title="介绍"></a>Introduction</h2><p>Amazon has thousands of engineers working on machine learning research, because it is key to the company's future success. With AI and machine learning, Amazon can improve customer service, increase operational efficiency, and maintain its competitive edge.</p><p>Amazon Web Services has released several new AIGC products, including Amazon Bedrock, Amazon EC2 Trn1n, Amazon EC2 Inf2, Titan AI, and Amazon CodeWhisperer. This has drawn considerable market attention, with many tech giants trying to get into the game and ride the latest trend in AI. AIGC has become the technology closest to commercial application, a blue-ocean market on the verge of exploding. The question is: who will stand out and break through the AIGC scramble?</p><h2 id="亚马逊的家庭桶"><a href="#亚马逊的家庭桶" class="headerlink" title="亚马逊的家庭桶"></a>Amazon's Full Product Lineup</h2><p>Amazon Bedrock is a new service that provides API access to foundation models from AI21 Labs, Anthropic, Stability AI, and Amazon itself. Bedrock is the basic framework on which users build and scale generative AI applications, with access to powerful text and image foundation models, including Amazon's Titan FMs.</p><p>Amazon is also testing its new Titan FMs and plans to launch two Titan models in the coming months. The first is a generative LLM for tasks such as summarization, text generation, classification, open-ended question answering, and information extraction. The second is an embeddings LLM that converts text inputs into numerical representations capturing the semantics of the text. Amazon also announced Amazon EC2 Trn1n instances powered by AWS Trainium and Amazon EC2 Inf2 instances powered by AWS Inferentia2.</p><p>Trn1 instances can deliver more than 50% savings on training costs over any other EC2 instance, and using them can help cut the time needed to train the largest deep learning models from months to weeks or even days. Instances powered by Inferentia2 are optimized for large-scale generative AI models containing hundreds of billions of parameters. Amazon also announced a preview of Amazon CodeWhisperer, an AI coding companion that generates real-time code suggestions based on developers' natural-language comments and prior code in the integrated development environment (IDE).</p><h2 id="CodeWhisperer的好处"><a href="#CodeWhisperer的好处" class="headerlink" title="CodeWhisperer的好处"></a>The Benefits of CodeWhisperer</h2><p>During the preview, AWS ran a productivity challenge: participants who used CodeWhisperer completed tasks 57% faster on average, and with a 27% higher success rate, than participants who did not. That is a huge leap in productivity!</p><h2 id="支持的语言和安全功能"><a href="#支持的语言和安全功能" class="headerlink" title="支持的语言和安全功能"></a>Supported Languages and Security Features</h2><p>CodeWhisperer is available for languages such as Python, Java, TypeScript, and C#, and also supports ten new languages including Go, Kotlin, Rust, PHP, and SQL. It has built-in security scanning (powered by automated reasoning) to find hard-to-detect vulnerabilities and suggest fixes. CodeWhisperer filters out biased or unfair code suggestions, and can flag suggestions that resemble open-source code that customers may want to reference or license.</p><h2 id="如何入门"><a href="#如何入门" class="headerlink" title="如何入门"></a>Getting Started</h2><p>CodeWhisperer is free for individual users. Anyone can sign up with an email account and get started within minutes, without even needing an AWS account. For business users, AWS offers a CodeWhisperer Professional tier, which includes single sign-on (SSO) integration with AWS Identity and Access Management (IAM) and higher security-scanning limits. In short, Amazon CodeWhisperer is an excellent tool for developers who want to write code more efficiently and save time. With its AI-based suggestions and security features, CodeWhisperer is a must-have for any developer looking to boost productivity.</p><h2 id="Trn1n实例"><a href="#Trn1n实例" class="headerlink" title="Trn1n实例"></a>Trn1 Instances</h2><p>Trn1 instances, powered by Trainium, can save up to 50% on training costs and are optimized to distribute training across multiple servers connected with 800 Gbps of second-generation Elastic Fabric Adapter (EFA) networking. Customers can deploy Trn1 instances in UltraClusters that scale up to 30,000 Trainium chips (more than 6 exaFLOPS of compute) in the same AWS Availability Zone, with petabit-scale networking. Many AWS customers, including Helixon, Money Forward, and the Amazon Search team, use Trn1 instances to help cut the time needed to train the largest deep learning models from months to weeks or even days, while lowering costs.</p><h2 id="新的网络优化的Trn1n实例"><a href="#新的网络优化的Trn1n实例" class="headerlink" title="新的网络优化的Trn1n实例"></a>The New Network-Optimized Trn1n Instances</h2><p>AWS announced the general availability of new network-optimized Trn1n instances, which provide 1600 Gbps of network bandwidth and deliver 20% higher throughput than Trn1 for large, network-intensive models. The new Trn1n instances use third-generation Elastic Fabric Adapter (EFA) networking with higher bandwidth and lower latency, enabling faster training and higher performance.</p><h2 id="Trn1n实例的优势"><a href="#Trn1n实例的优势" class="headerlink" title="Trn1n实例的优势"></a>Advantages of Trn1n Instances</h2><p>Trn1n instances offer many advantages:</p><ul><li><p>High-performance networking: the new Trn1n instances achieve higher network throughput and lower latency through third-generation EFA networking, delivering outstanding performance.</p></li><li><p>High scalability: with Trn1n instances, customers can scale up to 30,000 Trainium chips in UltraClusters to meet large-scale training needs.</p></li><li><p>Cost-effectiveness: Trn1n instances help customers save up to 50% on training costs while delivering excellent performance.</p></li><li><p>Accelerated deep learning: Trn1n instances help shorten the training of deep learning models from months to weeks or even days.</p></li></ul><h2 id="如何使用Trn1n实例"><a href="#如何使用Trn1n实例" class="headerlink" title="如何使用Trn1n实例"></a>How to Use Trn1n Instances</h2><p>Training on Trn1n instances is straightforward. Simply select a Trn1n instance type in the AWS console or with the AWS Command Line Interface (CLI) and target it for your training job. You can configure the number and size of instances to match your actual needs, and take advantage of the instances' high-performance networking and scalability to speed up training.</p><h2 id="Trn1n实例的应用"><a href="#Trn1n实例的应用" class="headerlink" title="Trn1n实例的应用"></a>Applications of Trn1n Instances</h2><p>Trn1n instances are widely used across deep learning scenarios, including computer vision, natural language processing, and speech recognition. Many companies and research institutions use Trn1n instances to accelerate model training and inference, improving the performance and effectiveness of their AI applications.</p><p>In short, the new network-optimized Trn1n instances bring higher performance and faster training to deep learning models. With their high-performance networking, scalability, and cost-effectiveness, Trn1n instances are a powerful tool for developers and researchers in the field of AI.</p>]]></content>
    
    
    <summary type="html">Accelerate deep learning training with Amazon's new Trn1n instances and improve the performance of generative AI applications. With high-performance networking, scalability, and cost-effectiveness, these instances offer developers and researchers the easiest way to build generative AI applications.</summary>
    
    
    
    
    <category term="Amazon AIGC" scheme="https://www.nablepart.com/tags/Amazon-AIGC/"/>
    
    <category term="Trn1n实例" scheme="https://www.nablepart.com/tags/Trn1n%E5%AE%9E%E4%BE%8B/"/>
    
    <category term="生成式人工智能应用程序" scheme="https://www.nablepart.com/tags/%E7%94%9F%E6%88%90%E5%BC%8F%E4%BA%BA%E5%B7%A5%E6%99%BA%E8%83%BD%E5%BA%94%E7%94%A8%E7%A8%8B%E5%BA%8F/"/>
    
    <category term="深度学习" scheme="https://www.nablepart.com/tags/%E6%B7%B1%E5%BA%A6%E5%AD%A6%E4%B9%A0/"/>
    
    <category term="高性能网络" scheme="https://www.nablepart.com/tags/%E9%AB%98%E6%80%A7%E8%83%BD%E7%BD%91%E7%BB%9C/"/>
    
    <category term="可扩展性" scheme="https://www.nablepart.com/tags/%E5%8F%AF%E6%89%A9%E5%B1%95%E6%80%A7/"/>
    
    <category term="成本效益" scheme="https://www.nablepart.com/tags/%E6%88%90%E6%9C%AC%E6%95%88%E7%9B%8A/"/>
    
    <category term="训练加速" scheme="https://www.nablepart.com/tags/%E8%AE%AD%E7%BB%83%E5%8A%A0%E9%80%9F/"/>
    
    <category term="性能提升" scheme="https://www.nablepart.com/tags/%E6%80%A7%E8%83%BD%E6%8F%90%E5%8D%87/"/>
    
  </entry>
  
  <entry>
    <title>Introducing the AIPRM for ChatGPT Extension: The Key to Boosting Your Search Engine Rankings</title>
    <link href="https://www.nablepart.com/1d52f049e5f1/"/>
    <id>https://www.nablepart.com/1d52f049e5f1/</id>
    <published>2023-07-19T11:50:26.000Z</published>
    <updated>2025-08-25T09:00:39.798Z</updated>
    
    <content type="html"><![CDATA[<h2 id="介绍-AIPRM-for-ChatGpt-扩展：提升搜索引擎排名的关键。"><a href="#介绍-AIPRM-for-ChatGpt-扩展：提升搜索引擎排名的关键。" class="headerlink" title="介绍 AIPRM for ChatGpt 扩展：提升搜索引擎排名的关键。"></a>Introducing the AIPRM for ChatGPT Extension: The Key to Better Search Engine Rankings</h2><h3 id="通过-AIPRM-for-ChatGPT-扩展提升您的-SEO-游戏。"><a href="#通过-AIPRM-for-ChatGPT-扩展提升您的-SEO-游戏。" class="headerlink" title="通过 AIPRM for ChatGPT 扩展提升您的 SEO 游戏。"></a>Level Up Your SEO Game with the AIPRM for ChatGPT Extension</h3><p><img src="https://cdn-images-1.medium.com/max/5952/0*pLOy8Ovx76xoj5NR" alt="Photo by NisonCo PR and SEO on Unsplash"></p><p>In today’s fiercely competitive digital landscape, ranking on the first page of Google can make or break a business.</p><p>But creating content that both engages your audience and ranks well on search engines is a daunting task. That is where the new AIPRM for ChatGPT extension comes in.</p><p>Have you heard of the new ChatGPT extension called AIPRM for ChatGPT?</p><p>This Google Chrome extension is an advanced tool that, used together with ChatGPT, can take your website’s search engine optimization to the next level.</p><blockquote><p><strong>Note: most of this article was written by Jasper AI, arguably the most advanced AI copywriting assistant available today!</strong></p></blockquote><h2 id="AIPRM-for-ChatGPT-扩展是什么，它是如何工作的？"><a href="#AIPRM-for-ChatGPT-扩展是什么，它是如何工作的？" class="headerlink" title="AIPRM for ChatGPT 扩展是什么，它是如何工作的？"></a>What Is the AIPRM for ChatGPT Extension, and How Does It Work?</h2><p>AIPRM stands for AI-Powered Response Manager: a response manager that uses artificial intelligence to power chatbots, making their interactions with users more conversational and human-like.</p><p>The AIPRM for ChatGPT extension is a small but powerful SEO Chrome extension, and a valuable tool for anyone looking to improve their website’s search engine rankings.</p><p>Once installed, it adds more than 2,600 prompts to your ChatGPT dashboard, giving you quick access to the prompts that matter most for SEO.</p><p>These AIPRM ChatGPT prompts harness the power of AI to analyze the top-ranking articles for your chosen keyword and identify the factors behind their success.</p><p>AIPRM then generates an article that incorporates those factors, giving your content the best possible chance of ranking on Google’s first page.</p><p>The generated article is not only SEO-friendly but also engaging and informative.</p><p>It includes elements such as headings, subheadings, and bullet points that make the information easy for readers to digest.</p><p>The content is also optimized to keep readers engaged throughout the article.<br><a href="https://medium.com/@benardlokibiz/humanizing-chatgpt-potential-to-generate-authentic-human-sounding-content-f6f0d113ed50"><strong>Humanizing ChatGPT: Potential to Generate Authentic Human-sounding Content.</strong></a></p><h2>Benefits of the AIPRM for ChatGPT Extension</h2><p>By using the AIPRM for ChatGPT extension, you can improve your website’s search engine rankings, drive more traffic to your site, and increase user engagement.</p><p>The extension helps you make informed decisions about your SEO strategy by giving you easy access to the most useful ChatGPT prompts.</p><p>You can also create your own prompts and add them to the community for others to use.<br><a href="https://medium.com/@benardlokibiz/chat-gpt-vs-jasper-ai-cf8b27ecdada"><strong>Chat GPT Vs Jasper AI</strong></a></p><h2>How to Install the AIPRM for ChatGPT Extension</h2><p>Installing the AIPRM for ChatGPT extension is simple. Just visit <a href="https://www.aiprm.com/">https://www.aiprm.com/</a>, click “Install AIPRM for ChatGPT”, and then click “Add to Chrome”. Once installed, AIPRM automatically adds more than 2,600 prompts to your ChatGPT dashboard.</p><h2>Top AIPRM ChatGPT Prompts for SEO</h2><p>Here are some of the best AIPRM ChatGPT prompts to use.</p><ol><li><p><strong>Human Written | 100% Unique | SEO Optimized Article</strong> - Writes human-sounding, plagiarism-free, SEO-optimized long-form articles with a proper outline.</p></li><li><p><strong>Outrank Article</strong> - Outranks a competitor with an in-depth, SEO-optimized article based on the competitor’s URL.</p></li><li><p><strong>Human-like Rewriter v1.6</strong> - Rewrites your AI-generated article so it sounds human-written, with a human-generated score of 90-100%.</p></li><li><p><strong>Keyword Strategy</strong> - Creates a keyword strategy and SEO content plan from a single keyword.</p></li><li><p><strong>YouTube Script Creator</strong> - Creates compelling script ideas for your YouTube videos. You enter a short description of the video, and it generates the title, scenes, and full script.</p></li><li><p><strong>Fully SEO Optimized Article including FAQs</strong> - Helps you write 100% unique, plagiarism-free, fully SEO-optimized articles, complete with a meta description, headings, 1,500 words, FAQs, and more.</p></li><li><p><strong>Smart Detailed Article Writer (H tags)</strong> - You provide the title of the article you want, and the prompt writes a long, detailed, share-ready article with H tags.</p></li><li><p><strong>Midjourney Prompt Generator</strong> - Generates extremely detailed Midjourney prompts for your keyword.</p></li><li><p><strong>Write Best Smart Article</strong> - With the “Write Best Smart Article” prompt, you simply enter the title of the article you want to publish, and AIPRM generates an article that is not only unique and plagiarism-free but also optimized for search engines.</p></li></ol><p><a href="https://medium.com/@benardlokibiz/7-best-ai-content-writing-tools-for-creating-quality-content-in-2023-2d218dbe9a33"><strong>7 Best AI Content Writing Tools for Creating Quality Content in 2023</strong></a></p><h2>Conclusion</h2><p>In conclusion, the AIPRM for ChatGPT extension is an innovative technology that helps businesses improve their SEO and drive more traffic to their websites. From human-sounding, plagiarism-free, SEO-optimized articles to keyword strategies and fully optimized articles with FAQs, AIPRM has it all! With more than 2,600 ready-made prompts and the ability to generate high-quality content, the AIPRM for ChatGPT extension is a powerful tool that helps businesses create compelling, informative content that attracts potential customers. Its integration with ChatGPT further enhances its capabilities, allowing businesses to optimize their content for search engines and improve their online visibility.</p><p>Follow us and subscribe to our email list for more AI-related content! I promise it will be one of a kind!</p><p><strong>P.S.:</strong> If you enjoyed this article and found it helpful, please consider <a href="https://www.buymeacoffee.com/benardloki8"><em><strong>buying me a coffee</strong></em></a> to support my work. Your support means a lot!</p><h2>FAQs</h2><ol><li><p><strong>What is the AIPRM for ChatGPT extension?</strong> The AIPRM for ChatGPT extension is a Chrome extension that adds more than 2,600 prompts to your ChatGPT dashboard, giving you quick access to the prompts most relevant to SEO.</p></li><li><p><strong>What is AIPRM?</strong> AIPRM stands for AI-Powered Response Manager: a response manager that uses artificial intelligence to power chatbots, making their interactions with users more conversational and human-like.</p></li><li><p><strong>What is the Chrome prompt extension for ChatGPT?</strong> The Chrome prompt extension for ChatGPT is called the AIPRM for ChatGPT extension. It is an AI-powered response manager that adds more than 2,600 prompts to your ChatGPT dashboard, giving you quick access to the prompts that matter most for SEO.</p></li><li><p><strong>How do I install the AIPRM extension in Chrome?</strong> Installing the AIPRM for ChatGPT extension is simple. Just visit <a href="https://www.aiprm.com/">https://www.aiprm.com/</a>, click “Install AIPRM for ChatGPT”, and then click “Add to Chrome”. Once installed, AIPRM automatically adds more than 2,600 prompts to your ChatGPT dashboard.</p></li><li><p><strong>How does the AIPRM for ChatGPT extension work?</strong> The AIPRM for ChatGPT extension uses AI to analyze the top-ranking articles for your chosen keyword and identify the factors behind their success. AIPRM then generates an article that incorporates those factors, giving your content the best possible chance of ranking on Google’s first page. The generated article is not only SEO-friendly but also engaging and informative. It includes headings, subheadings, bullet points, and other elements that make the information easy for readers to digest, and the content is optimized to keep readers engaged throughout.</p></li><li><p><strong>What are the benefits of using the AIPRM for ChatGPT extension?</strong> With the AIPRM for ChatGPT extension, you can improve your website’s search engine rankings, drive more traffic to your site, and increase user engagement. The extension helps you make informed decisions about your SEO strategy by giving you easy access to the most useful ChatGPT prompts.</p></li><li><p><strong>How do I install the AIPRM for ChatGPT extension?</strong> Just visit <a href="https://www.aiprm.com/">https://www.aiprm.com/</a>, click “Install AIPRM for ChatGPT”, and then click “Add to Chrome”. Once installed, AIPRM automatically adds more than 2,600 prompts to your ChatGPT dashboard.</p></li><li><p><strong>What are some of the top AIPRM ChatGPT prompts for SEO?</strong> Some of the top AIPRM ChatGPT prompts for SEO include: Human Written | 100% Unique | SEO Optimized Article, Outrank Article, Human-like Rewriter v1.6, Keyword Strategy, YouTube Script Creator, Fully SEO Optimized Article including FAQs, Smart Detailed Article Writer (H tags), and Midjourney Prompt Generator.</p></li></ol>]]></content>
    
    
    <summary type="html">This article walks you through the simple steps to install the AIPRM for ChatGPT extension, which can improve your website’s search engine rankings and increase user engagement. Learn how to install it and get access to more than 2,600 useful ChatGPT prompts.</summary>
    
    
    
    
    <category term="AIPRM for ChatGPT" scheme="https://www.nablepart.com/tags/AIPRM-for-ChatGPT/"/>
    
    <category term="扩展安装" scheme="https://www.nablepart.com/tags/%E6%89%A9%E5%B1%95%E5%AE%89%E8%A3%85/"/>
    
    <category term="SEO策略" scheme="https://www.nablepart.com/tags/SEO%E7%AD%96%E7%95%A5/"/>
    
    <category term="网站排名" scheme="https://www.nablepart.com/tags/%E7%BD%91%E7%AB%99%E6%8E%92%E5%90%8D/"/>
    
    <category term="用户参与度" scheme="https://www.nablepart.com/tags/%E7%94%A8%E6%88%B7%E5%8F%82%E4%B8%8E%E5%BA%A6/"/>
    
    <category term="ChatGPT提示" scheme="https://www.nablepart.com/tags/ChatGPT%E6%8F%90%E7%A4%BA/"/>
    
  </entry>
  
  <entry>
    <title>AIGC+ Global Project Roadshow: Hemi — an Instant Social Network Built on “Information” Rather Than “People”</title>
    <link href="https://www.nablepart.com/a7228402c47e/"/>
    <id>https://www.nablepart.com/a7228402c47e/</id>
    <published>2023-07-18T11:50:26.000Z</published>
    <updated>2025-08-25T09:00:39.798Z</updated>
    
    <content type="html"><![CDATA[<h2 id="AIGC-全球项目路演，Hemi-—-基于“信息”而非“人”的即时社交网络。"><a href="#AIGC-全球项目路演，Hemi-—-基于“信息”而非“人”的即时社交网络。" class="headerlink" title="AIGC+ 全球项目路演，Hemi — 基于“信息”而非“人”的即时社交网络。"></a><strong>AIGC+ Global Project Roadshow: Hemi — an Instant Social Network Built on “Information” Rather Than “People”</strong></h2><p><em><strong>Artificial intelligence is changing how people interact socially, and the decentralized real-time space “Half Zone” has become an efficient everyday tool for university students: “Anytime, anywhere, with any idea or need, you can find the right people faster and chat with them immediately.”</strong></em></p><blockquote><p><strong>On February 28, 2023, Hemi won the “Creative Idea” award at the closed-door AIGC+ roadshow session hosted by Edu3.0 Cube.</strong></p></blockquote><p><img src="https://cdn-images-1.medium.com/max/2846/1*jzQdcgkUg_09PegjeLuWVA.png"></p><p>Hemi is a decentralized instant group space. By combining NLP technology, artificial intelligence, and recommendation algorithms with a commitment to excellent product design, Hemi aims to let users socialize, discuss, and chat with well-matched groups anytime, anywhere, engaging in the topics and communities that suit them best.</p><p><img src="https://cdn-images-1.medium.com/max/2780/1*pJQxp0zmPpkIhmtudOLj3Q.png"></p><h2 id="特点：即时性、易用性、去中心化"><a href="#特点：即时性、易用性、去中心化" class="headerlink" title="特点：即时性、易用性、去中心化"></a>Features: Instant, Easy to Use, Decentralized</h2><p><strong>1. Instant</strong></p><blockquote><p><em>Every group is built on real-time communication. Live voice chat and instant messaging let users communicate and connect in the fastest, most effective way.</em></p></blockquote><p><strong>2. Easy to use</strong></p><blockquote><p><em>No more “anti-social” experiences such as joining and leaving groups or adding friends. Users start chatting from a real-time idea or piece of information, and every live group they join becomes a trail they can quickly locate and revisit later.</em></p></blockquote><p><strong>3. Decentralized</strong></p><blockquote><p><em>Each live group lives at an independent URL and can appear anywhere on the Internet. The product has no centralized recommendation algorithm, which lowers the barrier to participation for every user.</em></p></blockquote><p><img src="https://cdn-images-1.medium.com/max/3540/1*ZplpTtxLy8eACcLOSDlJ9g.png"></p><h2 id="项目亮点"><a href="#项目亮点" class="headerlink" title="项目亮点"></a><strong>Project Highlights</strong></h2><p><strong>1. Breaking the limits of information connection</strong></p><blockquote><p><em>Hemi’s self-developed social search engine uses familiar, search-like behavior to connect users to matching discussion groups in real time, so they can start discussing and chatting immediately. Instead of connecting information to information, the engine connects people through information.</em></p></blockquote><p><strong>2. Social preference and demand matching</strong></p><blockquote><p><em>Through Hemi’s self-developed, deep-learning-powered matching engine for social preferences and needs, users can chat in real time with the groups that best match them around the same content.</em></p></blockquote><p><strong>3. Real-time chat with an AI assistant</strong></p><blockquote><p><em>By integrating the AI assistant with the search engine, chat rooms are generated in real time and the best-matched online partners are found to start a discussion.</em></p></blockquote><p><strong>4. Instant group search engine</strong></p><blockquote><p><em>Hemi’s AI network assistant, driven by NLP, deep learning, and a social-demand model, can accurately match any instant idea or need of a user against a large pool of instant groups.</em></p></blockquote><p><strong>5. AI network assistant</strong></p><blockquote><p><em>The personal assistant, powered by generative AI, creates the best-matching real-time chat room for a user’s information and needs, anytime and anywhere.</em></p></blockquote><p><img src="https://cdn-images-1.medium.com/max/2000/1*Lh3pBmPHyeNwOVz30int4A.png"></p><h2 id="在即时聊天中的-AIGC"><a href="#在即时聊天中的-AIGC" class="headerlink" title="在即时聊天中的 AIGC"></a><strong>AIGC in Instant Chat</strong></h2><p><em>From Hemi’s application scenarios, we can see several directions for AIGC technology in instant chat:</em></p><ol><li><p><strong>Intelligent customer service</strong></p><blockquote><p><em>One of the main applications of AIGC technology is intelligent customer service, where bots take over conversations from human agents. As AI technology advances, its accuracy and efficiency will keep improving, giving customers better service.</em></p></blockquote></li><li><p><strong>Personalized conversation</strong></p><blockquote><p><em>AIGC technology can learn users’ needs and preferences by analyzing their chat records and historical data, and then offer personalized conversation services. Hemi fully embodies this direction; the technology will make chatbots feel more human, improving customer satisfaction and loyalty.</em></p></blockquote></li></ol><p><img src="https://cdn-images-1.medium.com/max/2600/1*73bACF86Qeq6XPdtWnYx1w.png"></p><h2 id="未来发展"><a href="#未来发展" class="headerlink" title="未来发展"></a><strong>Future Development</strong></h2><p><em>Future directions for AIGC technology in instant messaging may include:</em></p><ol><li><p><strong>Deep personalization:</strong></p><blockquote><p><em>AIGC technology can achieve deep personalization by analyzing more complex user data, letting chatbots better understand users’ needs and preferences and provide more tailored services and support.</em></p></blockquote></li><li><p><strong>Autonomous learning:</strong></p><blockquote><p><em>AIGC technology will gradually learn autonomously: chatbots will keep improving their accuracy and efficiency through self-correction and optimization, becoming more adaptive and intelligent.</em></p></blockquote></li><li><p><strong>Cross-domain applications:</strong></p><blockquote><p><em>AIGC technology will be applied in fields such as healthcare, finance, and education, making chatbots more specialized and accurate and enabling higher-quality services and support.</em></p></blockquote></li><li><p><strong>Voice interaction:</strong></p><blockquote><p><em>AIGC technology will gradually support voice interaction, letting users talk to chatbots by voice. This will make chatbots more convenient and user-friendly, and interaction more natural.</em></p></blockquote></li></ol><p>Overall, AIGC technology will gradually become deeper, smarter, and more cross-domain, providing users with more customized, higher-quality services and support. At the same time, chatbots will become an indispensable part of daily life, bringing more convenience and intelligence to people’s lives.</p><blockquote><h1 id="再次祝贺-Hemi-获得此奖项！"><a href="#再次祝贺-Hemi-获得此奖项！" class="headerlink" title="再次祝贺 Hemi 获得此奖项！"></a><strong>Congratulations again to Hemi on this award!</strong></h1><h1 id="创业很艰难，路演并不是为了确定谁是最好的，而是为创业者建立沟通的桥梁。"><a href="#创业很艰难，路演并不是为了确定谁是最好的，而是为创业者建立沟通的桥梁。" class="headerlink" title="创业很艰难，路演并不是为了确定谁是最好的，而是为创业者建立沟通的桥梁。"></a>Entrepreneurship is hard; a roadshow is not about deciding who is best, but about building bridges of communication between founders.</h1></blockquote><p><img src="https://cdn-images-1.medium.com/max/4404/1*PfBQhMkLVG2cW9Baeq2cAA.png"></p>]]></content>
    
    
    <summary type="html">The AIGC+ Global Project Roadshow introduces Hemi, an instant social network built on “information” rather than “people”. By combining AI, NLP technology, and excellent product design, Hemi aims to let users socialize, discuss, and chat with well-matched groups anytime, anywhere, in the topics and communities that suit them best.</summary>
    
    
    
    
    <category term="AIGC+" scheme="https://www.nablepart.com/tags/AIGC/"/>
    
    <category term="全球项目路演" scheme="https://www.nablepart.com/tags/%E5%85%A8%E7%90%83%E9%A1%B9%E7%9B%AE%E8%B7%AF%E6%BC%94/"/>
    
    <category term="Hemi" scheme="https://www.nablepart.com/tags/Hemi/"/>
    
    <category term="信息" scheme="https://www.nablepart.com/tags/%E4%BF%A1%E6%81%AF/"/>
    
    <category term="社交网络" scheme="https://www.nablepart.com/tags/%E7%A4%BE%E4%BA%A4%E7%BD%91%E7%BB%9C/"/>
    
    <category term="即时聊天" scheme="https://www.nablepart.com/tags/%E5%8D%B3%E6%97%B6%E8%81%8A%E5%A4%A9/"/>
    
    <category term="NLP 技术" scheme="https://www.nablepart.com/tags/NLP-%E6%8A%80%E6%9C%AF/"/>
    
    <category term="人工智能" scheme="https://www.nablepart.com/tags/%E4%BA%BA%E5%B7%A5%E6%99%BA%E8%83%BD/"/>
    
    <category term="推荐算法" scheme="https://www.nablepart.com/tags/%E6%8E%A8%E8%8D%90%E7%AE%97%E6%B3%95/"/>
    
    <category term="社交偏好" scheme="https://www.nablepart.com/tags/%E7%A4%BE%E4%BA%A4%E5%81%8F%E5%A5%BD/"/>
    
    <category term="实时群组" scheme="https://www.nablepart.com/tags/%E5%AE%9E%E6%97%B6%E7%BE%A4%E7%BB%84/"/>
    
    <category term="智能客服" scheme="https://www.nablepart.com/tags/%E6%99%BA%E8%83%BD%E5%AE%A2%E6%9C%8D/"/>
    
    <category term="个性化对话" scheme="https://www.nablepart.com/tags/%E4%B8%AA%E6%80%A7%E5%8C%96%E5%AF%B9%E8%AF%9D/"/>
    
    <category term="跨领域应用" scheme="https://www.nablepart.com/tags/%E8%B7%A8%E9%A2%86%E5%9F%9F%E5%BA%94%E7%94%A8/"/>
    
    <category term="语音交互" scheme="https://www.nablepart.com/tags/%E8%AF%AD%E9%9F%B3%E4%BA%A4%E4%BA%92/"/>
    
  </entry>
  
  <entry>
    <title>Why are financial markets so unpredictable?</title>
    <link href="https://www.nablepart.com/7188c32b8805/"/>
    <id>https://www.nablepart.com/7188c32b8805/</id>
    <published>2023-07-17T12:00:00.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
    <content type="html"><![CDATA[<p>Financial problems are difficult to predict. The stock market is well organized and, as a useful source of business financing, contributes to job creation, but it is also fraught with risk. In the foreign exchange market, traders convert dollars into euros, yen, rubles, or pounds sterling, primarily to make a small percentage of profit on a very large transaction. Professional traders use their experience to keep the risk as low, and the profit as high, as possible.</p><p>But the stock market is more complex than horse racing, and traders now rely on sophisticated algorithms: mathematical models running on computers. Many trades have been automated: algorithms make split-second decisions and trade without any human intervention.</p><p>All of these developments are motivated by the desire to make financial problems more predictable, to reduce uncertainty, and thus to reduce risk. <strong>The financial crisis happened precisely because too many bankers thought they had done so.</strong> As it turns out, they might as well have looked into a crystal ball.</p><p>This is not a new problem. Between 1397 and 1494, in Renaissance Italy, the powerful Medici family ran a bank that was the largest and most respected in all of Europe; at one time it made the Medici the richest family in Europe. In 1397, Giovanni di Bicci de’Medici spun off his own portion of his nephew’s bank and moved it to Florence. The bank continued to expand, with branches in Rome, Venice, and Naples, before spreading its tentacles to Geneva, Bruges, London, Pisa, Avignon, Milan, and Lyon. All seemed to be going well under the rule of Cosimo de’Medici until his death in 1464, when his son Piero took over.</p><p>Behind the scenes, however, the Medici were profligate spenders: from 1434 to 1471, they spent around 17,000 gold florins a year. 
This is the equivalent of 20 to 30 million dollars today.</p><p>Hubris begets retribution, and the inevitable collapse began with the Lyon branch, which had a dishonest manager. The London branch then made a large loan to the ruler of the day, a risky decision given that the king and queen were somewhat unpredictable and had a reputation for not repaying their debts, and in 1478 it collapsed with a total loss of 51,533 gold florins. The Bruges branch made the same mistake. According to Niccolò Machiavelli, Piero tried to shore up the finances by calling in debts, which drove several local businesses under and annoyed many influential people.</p><p>Branch after branch failed, and when the Medici fell out of favor and lost political influence in 1494, the end was in sight. Even though the Medici still ran the largest bank in Europe, a mob razed the bank’s Florence headquarters, and the Lyon branch was subject to a hostile takeover; its manager had approved too many bad loans and borrowed heavily from other banks to cover up the disaster.</p><p>This all sounds very familiar. During the dot-com bubble of the 1990s, investors sold off vastly profitable brick-and-mortar businesses and gambled instead on whatever groups of kids could whip up in their attics with computers and modems. In a 1996 speech, then-Federal Reserve Chairman Alan Greenspan decried the market’s “irrational exuberance”, but no one cared until the dot-com crash of 2000. By 2002, a total of $5 trillion had been lost in market capitalization.</p><p>It had happened many times before. 17th-century Holland was prosperous and confident, reaping huge profits from trade with the Far East. Tulips, a rare flower from Turkey, became a status symbol, and their prices soared, leading to a “tulip mania” that spawned a specialized tulip exchange. 
Speculators bought bulbs and held on to them, creating artificial scarcity to drive up prices, and a futures market emerged for contracts to buy and sell tulips at a future date. <strong>By 1623, a rare tulip cost more than an Amsterdam merchant’s house. When the bubble burst, the Dutch economy was set back 40 years.</strong></p><p>In 1711, British entrepreneurs formed a company to “manage and co-ordinate the merchants of Great Britain in the trade of the South Pacific and other parts of America, and also to encourage the fisheries”. The British king granted it a monopoly on South American trade. Speculators drove the share price up tenfold, and a series of bizarre spin-off companies were formed. One famous prospectus offered shares in “a business with great advantages, but no one knows exactly what”.</p><p>It was, of course, nonsense. <strong>When sanity was restored, the market collapsed: ordinary investors lost their life savings, while major shareholders and directors had long since fled the market.</strong> In the end, it took Robert Walpole, Britain’s first Prime Minister, who had sold off all his shares at the peak, to restore order by splitting the debt between the government and the East India Company. 
Directors were forced to compensate investors, but many of the worst offenders remained at large.</p><p>When the bubble burst, Newton, then Master of the Mint, commented, “I can calculate the motion of the stars, but not the madness of mankind.” It took quite some time before mathematically minded academics began to study market mechanisms in the hope of understanding high finance, and in the meantime they turned their attention to how to make rational decisions, or at least to the best estimates of which behaviors are rational.</p><h2 id="The-Underrated-Brownian-Motion-Model"><a href="#The-Underrated-Brownian-Motion-Model" class="headerlink" title="The Underrated Brownian Motion Model"></a>The Underrated Brownian Motion Model</h2><p>Anyone who reads the financial pages of a newspaper or follows the stock market on the Internet will quickly realize that the volume and price of stocks change in an irregular and unpredictable way. Figure 1 shows how the FTSE 100 index (the composite price of the top 100 companies on the UK stock market) changed between 1984 and 2014: it looks more like a random walk than a smooth curve.</p><p>The French mathematician Louis Bachelier spotted this similarity and modeled changes in share prices on a physical process called Brownian motion. In 1827, the Scottish botanist Robert Brown, looking through a microscope at particles suspended in water inside the cavities of pollen grains, noticed that the particles jiggled about at random, but he could not explain why. In 1905, Albert Einstein proposed that the particles were being buffeted by collisions with water molecules. His mathematical analysis of this physical phenomenon convinced many scientists that matter is made of atoms (a surprisingly controversial idea in 1900). 
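</p><p><em>The diffusion picture can be checked numerically. The sketch below is my illustration, not from the original text; the step and trial counts are arbitrary choices. It simulates simple ±1 random walks and confirms the key statistical signature Einstein derived and Bachelier assumed: the variance of the displacement grows linearly with time.</em></p>

```python
import random

def random_walk(steps: int, rng: random.Random) -> int:
    """Displacement after `steps` fair +1/-1 coin flips."""
    return sum(rng.choice((-1, 1)) for _ in range(steps))

def displacement_variance(steps: int, trials: int, seed: int = 0) -> float:
    """Sample variance of the final displacement over many walks."""
    rng = random.Random(seed)
    walks = [random_walk(steps, rng) for _ in range(trials)]
    mean = sum(walks) / trials
    return sum((w - mean) ** 2 for w in walks) / trials

# Each +/-1 step has variance 1, so after t steps the variance should be
# close to t: it grows linearly in time, the signature of diffusion.
for t in (100, 400):
    print(t, displacement_variance(t, trials=2000))
```

<p>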
In 1908, Jean Perrin confirmed Einstein’s explanation.</p><p><img src="https://cdn.jsdelivr.net/gh/youngjuning/images@main/202310291833294.png"></p><p>Using the Brownian motion model, Bachelier answered a statistical question about the stock market: how do expected prices (statistical averages) vary over time? Specifically, what does the probability density of prices look like, and how does it evolve?</p><p>Bachelier gave an estimate of the most likely future price and of the range of possible fluctuations around it. He wrote down an equation for the probability density, now known as the Chapman-Kolmogorov equation, and solved it to obtain a normal distribution whose variance (spread) increases linearly over time.</p><p>We now know this as the probability density of the diffusion equation, also called the heat equation because that is where it first appeared. If you heat a metal pan on the stove, the handle gets hot even though it is not in direct contact with the heat source, because heat diffuses through the metal. In 1807, Fourier gave a “heat conduction equation” that governs this process. The same equation applies to other kinds of diffusion, such as a drop of ink spreading in water. <strong>Bachelier proved that in the Brownian motion model, the price of an option spreads like heat.</strong></p><p>He also developed a second method using random walks. <strong>If the steps of a random walk are made smaller and smaller and taken faster and faster, the walk approximates Brownian motion.</strong> He noted that this approach gives the same result. He then calculated how the price of a “stock option” should change over time. (A stock option is a contract to buy or sell a commodity at a fixed price at a future date. These contracts can themselves be bought and sold, and whether buying or selling is wise depends on the actual price movement of the commodity). 
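</p><p><em>Here is a minimal sketch of the random-walk idea in code (mine, not Bachelier’s actual derivation; all parameters are invented for illustration): model the future price as today’s price plus normally distributed noise whose spread grows like the square root of time, then estimate a call option’s value as the average payoff over many simulated future prices.</em></p>

```python
import math
import random

def call_value_random_walk(s0: float, strike: float, sigma: float,
                           t: float, n_paths: int, seed: int = 0) -> float:
    """Monte Carlo value of a call option under an arithmetic Brownian model.

    Future price = s0 + sigma * sqrt(t) * Z with Z standard normal
    (drift and interest are ignored to keep the sketch short).
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        s_t = s0 + sigma * math.sqrt(t) * rng.gauss(0.0, 1.0)
        total += max(s_t - strike, 0.0)  # payoff of a call option
    return total / n_paths

# For an at-the-money call this model gives sigma * sqrt(t / (2 * pi))
# in closed form, so the simulation can be sanity-checked against it.
estimate = call_value_random_walk(s0=100.0, strike=100.0, sigma=20.0,
                                  t=1.0, n_paths=50_000)
closed_form = 20.0 * math.sqrt(1.0 / (2 * math.pi))
print(estimate, closed_form)
```

<p>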
By understanding how the current price spreads, we can get the best estimate of the actual future price.</p><p>The paper had a lukewarm reception, probably because its field of application was unusual, but it passed review and was published in a high-quality scientific journal. Bachelier’s career, however, was later derailed by a tragic misunderstanding. He continued to study diffusion and related probabilistic topics and became a professor at the Sorbonne, but when World War I broke out, he joined the army. After the war, following a series of temporary academic posts, he applied for a permanent position in Dijon.</p><p>Maurice Gevrey, who was responsible for evaluating the application, thought he had found a major error in one of Bachelier’s papers, a verdict echoed by the expert Paul Lévy, and Bachelier’s career was ruined. But both men had misunderstood his notation; it was not wrong. Bachelier wrote a letter of righteous indignation about it, but to no avail. Eventually Lévy realized that Bachelier had been right all along and apologized, and they made up. Even then, however, Lévy never became interested in the paper’s application to the stock market; he commented in his notebook, “So much about finance!”</p><p>Bachelier’s analysis of how the value of stock options changes over time under random fluctuations was eventually embraced by mathematical economists and market researchers. The goal was to understand the behavior of a market in which options (not just the underlying commodity) are traded. A fundamental problem was to find a rational way to price options, that is, <strong>a common set of rules that everyone could use to work out prices for the instruments they cared about.</strong> This makes it possible to assess the risk involved in a particular trade, and thus encourages market activity.</p><h2 id="The-Overrated-Black-Scholes-Pricing-Model"><a href="#The-Overrated-Black-Scholes-Pricing-Model" class="headerlink" title="The Overrated Black-Scholes Pricing Model"></a>The Overrated Black-Scholes Pricing Model</h2><p>In 1973, Fischer Black and Myron Scholes published “The Pricing of Options and Corporate Liabilities” in the Journal of Political Economy. Over the previous decade they had developed a mathematical formula to determine a reasonable price for a given option. Experiments with trading on the formula were not very successful, so they decided to make their reasoning public. Robert Merton provided a mathematical justification of the formula, which came to be known as the Black-Scholes option pricing model. It separates fluctuations in the price of an option from the risk of the underlying commodity, leading to a trading strategy known as delta hedging: <strong>in effect, repeatedly buying and selling the underlying commodity so as to cancel out the risk associated with the option.</strong></p><p>The Black-Scholes model is a partial differential equation, known as the Black-Scholes equation, closely related to the diffusion equation that Bachelier distilled from Brownian motion. Numerical methods can be used to find the optimal price of an option in any situation. The fact that a single “reasonable” price could be calculated at all (even though it rested on a specific model that might not apply in reality) was enough to convince financial institutions to use it, and a huge options market was born.</p><p>The mathematical assumptions behind the Black-Scholes equation are not entirely consistent with reality. One important reason is that the probability distribution of the underlying diffusion process is normal, under which extreme events are vanishingly unlikely. In fact, such events are much more common, a phenomenon known as thick tails. 
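</p><p><em>The difference thick tails make is easy to demonstrate by simulation. In the sketch below (my illustration, not from the original text; a Student-t distribution with 3 degrees of freedom simply stands in for a generic thick-tailed alternative), a 5-standard-deviation move is essentially never seen under the normal model but shows up routinely under the thick-tailed one.</em></p>

```python
import random

def tail_prob_normal(threshold: float, n: int, seed: int = 0) -> float:
    """Estimated probability of |X| > threshold for a standard normal."""
    rng = random.Random(seed)
    return sum(abs(rng.gauss(0.0, 1.0)) > threshold for _ in range(n)) / n

def tail_prob_student_t3(threshold: float, n: int, seed: int = 0) -> float:
    """Same estimate for a thick-tailed Student-t with 3 degrees of freedom,
    sampled as normal / sqrt(chi-square with 3 dof / 3)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        chi2 = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(3))
        hits += abs(z / ((chi2 / 3) ** 0.5)) > threshold
    return hits / n

n = 100_000
print("normal:", tail_prob_normal(5.0, n))            # essentially zero
print("student-t(3):", tail_prob_student_t3(5.0, n))  # far larger
```

<p>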
There is a class of probability distributions known as stable distributions, defined by four parameters; Figure 2 shows three members of the family, each corresponding to a different value of the key stability parameter. When this parameter is 2, we get the normal distribution (gray curve), which has no thick tails. The other two distributions (black curves) do have thick tails: on both sides of the graph, the black curves lie above the gray one.</p><p><img src="https://cdn.jsdelivr.net/gh/youngjuning/images@main/202310291837913.png"></p><p><strong>Using a normal distribution to model financial data that actually has thick tails greatly underestimates the risk of extreme events.</strong> With or without thick tails, these events are rare compared with normal fluctuations, but thick tails make them common enough to pose a serious problem, and extreme events can cost you a great deal of money. Unexpected shocks, such as sudden political upheaval or the collapse of a large company, can make extreme events even more likely than a thick-tailed distribution would predict. The Internet bubble and the 2008 financial crisis were both associated with such unexpected risks.</p><p>Despite these problems, the Black-Scholes equation is widely used because of its utility: it is easy to calculate, and most of the time it gives a good approximation to what happens in real markets. Billionaire investor Warren Buffett has warned: “The Black-Scholes equation is as close to sacred as you can get in finance… However, if the equation is applied over longer time periods, it can lead to absurd results. To be fair, Black and Scholes must have understood this. But their devoted followers may have overlooked the cautionary note that accompanied the formula when they first publicized it.”</p>]]></content>
    
    
    <summary type="html">Is fitting the real world with mathematical models reliable or not? For centuries, mathematicians and physicists have looked to probability theory and statistics to make sense of the uncertainties of various worlds, but they have found that financial markets have always been a difficult area to depict mathematically. In Who&#39;s Rolling the Dice, science writer Ian Stewart skillfully establishes an accessible and imaginative mathematical framework that shows the impact of uncertainty in many areas, including financial markets, from the perspectives of probability theory, statistics, Bayesian methods, and chaos theory.</summary>
    
    
    
    <category term="Financial" scheme="https://www.nablepart.com/categories/Financial/"/>
    
    
    <category term="Mathematics" scheme="https://www.nablepart.com/tags/Mathematics/"/>
    
    <category term="Probability" scheme="https://www.nablepart.com/tags/Probability/"/>
    
    <category term="Statistics" scheme="https://www.nablepart.com/tags/Statistics/"/>
    
    <category term="Bayesian" scheme="https://www.nablepart.com/tags/Bayesian/"/>
    
  </entry>
  
  <entry>
    <title>Li Yanhong and others have already made a fortune before I even started making money.</title>
    <link href="https://www.nablepart.com/a5ac7f2c7385/"/>
    <id>https://www.nablepart.com/a5ac7f2c7385/</id>
    <published>2023-07-17T12:00:00.000Z</published>
    <updated>2025-08-25T09:00:39.806Z</updated>
    
    <content type="html"><![CDATA[<p>With many large models announced as open to the public, the era in which everyone can “use” AI has arrived. In just over half a year, 130 large-model companies have emerged in China, and the “Battle of 100 Models” has lived up to its name. Now the knockout rounds have begun: large-model products have finished internal testing and are sitting at the card table, waiting to be tested by the market. Large-model makers, pushed into the second half overnight, face a new problem: how should they fight the application war?</p><p>The first half of September was the busiest period in the domestic large-model field.</p><p>When Tencent announced the opening of its large model “Hunyuan” on September 15, BAT officially converged on the field of general-purpose large models. In the past half month, more than a dozen large-model products have announced that they have passed review under the “Interim Measures for the Management of Generative Artificial Intelligence Services” and are open to the public.</p><p>As if overnight, the domestic large-model field has been pushed into the second half of the competition. If the first half was mainly about technology, the big models, until now in internal testing, have rushed out into the crowd to face the test of the market.</p><p>To survive, a model has to be used, and large-model makers are racing flat out to ship AI applications. 
In September, Baidu upgraded its Intelligent Cloud Qianfan large-model platform and released 11 AI-native applications; after Alibaba’s Tongyi Qianwen large model opened, Taobao’s native large-model AI application “Taobao Wenqi” also entered internal testing.</p><p>But this also means that the 130 large-model companies, and especially the 40% of them building general-purpose models, face a life-and-death battle royale.</p><h2 id="Are-“Wen-Xin-Yi-Yan”-useful"><a href="#Are-“Wen-Xin-Yi-Yan”-useful" class="headerlink" title="Are “Wen Xin Yi Yan” useful?"></a>Is “Wenxin Yiyan” useful?</h2><p>For Chinese big models, August 31 is a day worth remembering.</p><p>At exactly 0:00 on August 31, Baidu announced that its large model Wenxin Yiyan was open to the public, firing the first shot. An hour later, at around 1 a.m., Zhipu AI announced that its Zhipu Qingyan model was open; at 3 a.m., Baichuan Intelligence also pushed out the news that its large model was open to the public. Over the next half month, more than a dozen companies announced that their large models were open to the public.</p><p>The concept of large models has been popular for only a little more than half a year, yet in China, according to CCID Consulting data, a total of 130 large models had been released as of July, of which general-purpose models account for 40%. Until now, though, most large-model products were still in internal testing.</p><p>According to the “Interim Measures for the Management of Generative Artificial Intelligence Services”, which came into effect on August 15 this year, companies must file a security assessment with the national cyberspace authority and complete an algorithm filing before releasing generative artificial intelligence (AIGC) products. 
This has become the compliance gate that every domestic AIGC product must pass before going to market.</p><p>Now that the other shoe has dropped, domestic models have entered the second half of the race overnight. After more than half a year of tuning algorithms and parameters in internal testing, every large model, mule or horse, now has to be trotted out for everyone to see.</p><p>The first battle began the moment the openings were officially announced. Big-model makers raced to release the news, competing not only for time but for traffic.</p><p>Opening the consumer (C-side) entrance matters enormously to large-model makers: it raises public awareness of their models, and it accumulates large amounts of data with which to train them. With the winner of the domestic “Battle of 100 Models” still undecided, whoever gets the ticket to open to the public first gains the advantage of time.</p><p>Judging from the large-model products that have passed approval, they have all positioned themselves as “personal AI assistants”, focusing on tools that cover most of a user’s daily scenarios, from work to entertainment, in an attempt to become the “super entrance” for C-side applications.</p><p><img src="https://cdn.jsdelivr.net/gh/youngjuning/images@main/1695078617651.png"></p><p>For example, Wenxin Yiyan offers applications for a variety of scenarios, such as content creation, AI painting, translation, and AI office work, with finer-grained categories including writing PPT outlines, research reports, daily work reports, and Xiaohongshu copy, testing a user’s MBTI, and playing with trendy Internet slang and “nonsense literature”. 
ByteDance, based on its Skylark model, has launched a public beta of its AI dialogue product “Doubao,” which integrates functions such as a learning assistant, a writing assistant, and open-ended conversation.</p><p>The more people use these products, the more complaints are inevitable. One user said that on August 31 he asked several products the same question, “How many large models have been opened?”, and got different answers. Large language models are prone to “hallucination,” fabricating plausible-sounding nonsense, a problem that persists across products. On social media there are also plenty of jokes about “teasing” large models; for example, some bloggers feed different models the absurd questions from Baidu Tieba’s “Ruozhi Bar.”</p><p>But the traffic effect of opening up is huge. According to official data released by Baidu, within 24 hours of opening, Wenxin Yiyan answered more than 33.42 million questions from netizens, and its app was downloaded more than 1 million times, topping the free chart of the Apple App Store on August 31. A few days later, after iFlytek’s Spark cognitive large model opened, it displaced Wenxin Yiyan at the top of the App Store free chart that day.</p><p>The rapid influx of traffic also strains computing power. After Wenxin Yiyan and SenseTime’s large models opened, users had to queue to get in. Baidu officials responded: “Traffic has exceeded expectations and new computing power is being mobilized. Please cut back on the teasing and make way for users with work and study needs.”</p><p>Before this, the only generative AI application to “flood the screen” was Miaoya Camera, incubated by Alibaba Entertainment. Next, major manufacturers will rebuild on top of large models and “remake” all their products. 
Perhaps more viral applications like Miaoya Camera will appear.</p><p>But for large model manufacturers, the more important question is whether they can support themselves once they face the test of the market.</p><h2 id="Knockout-Round-Starts-Volume-Industry-Volume-Application"><a href="#Knockout-Round-Starts-Volume-Industry-Volume-Application" class="headerlink" title="The Knockout Round Begins: Competing on Industries, Competing on Applications"></a>The Knockout Round Begins: Competing on Industries, Competing on Applications</h2><p>As the first half ends, the brutal elimination round among large models has kicked off. Shen Dou, executive vice president of Baidu Group and president of Baidu Intelligent Cloud Business Group, told the media that once models are open to the public, their strengths and weaknesses are easier to judge, and many large models on the market will “quickly disappear.”</p><p>Cheng Hao, founding partner of Yuanwang Capital and founder of Xunlei, likewise argued that more latecomers at the general-purpose layer would add little value. Although no winner has yet emerged among general large models, the field has become “a game for the few”; he estimated that no domestic investor would back another large base model.</p><p>Large models have also reached a turning point in commercialization: to survive, being used is what counts.</p><p>On the path to commercialization, ChatGPT has set an example for latecomers: after its registered users crossed the 100 million mark, it began charging a C-side subscription fee of US$20 per month; but B-side revenue has always been its focus. It opens APIs (application programming interfaces) to enterprises and developers for a fee, and in August this year it launched ChatGPT Enterprise.</p><p>Among China’s newly opened large model vendors, none charges membership fees on the C-side. 
This is partly to attract traffic in the early days of opening, but also because users’ habit of paying for Internet applications has yet to form. In the minds of most ordinary users, a large model is still just a chat tool, and willingness to pay is weak.</p><p>By contrast, B-side enterprises remain the customer group most willing to pay for innovative applications, and their demand for intelligence is strong. “Going into industries” has therefore become the consensus route to commercializing domestic large models.</p><p>In China, large models for vertical industries such as health care, finance, and education have emerged one after another, and companies that released general-purpose models are also launching industry-specific model services.</p><p><img src="https://cdn.jsdelivr.net/gh/youngjuning/images@main/1695078722140.png"></p><p>Even so, domestic users still face a “cognitive wall” around large models. After more than half a year of hype, large models are still “floating” above the ground: not only have they only just opened to C-end users, but outside the industry even many companies remain confused about them. Wang Cheng, product and technology director at a Beijing health-care company, said that senior management began discussing months ago what changes large models would bring to the industry, but after long deliberation they are still waiting and watching, unsure how to start.</p><p>Large models must be put to use: customer feedback improves the quality of training data and pushes the industry forward faster.</p><p>China’s leading technology companies have spent years building platforms and hold relatively rich industry resources. They have connected large models to their own cloud platforms, focusing on a “large model platform + solutions + AI applications” model. 
The aim is to use AI capabilities to empower enterprises.</p><p>Robin Li, chairman and CEO of Baidu, stated publicly not long ago that “competing on models is meaningless; the opportunities lie in competing on applications.” Next, whether for companies at the general-purpose layer or enterprises innovating on top of general models, the contest over AI applications will be fierce.</p><p>“Future application development will be very fast. As digital infrastructure, large models themselves are already maturing; on that foundation, applications in various industries will certainly emerge in batches,” Cheng Hao said.</p><p>The battle starts with who can open up the market of industry customers first. “Even a small traditional-industry company like ours has been courted enthusiastically by Baidu’s sales team. They would contact us every day to talk if they could,” Wang Cheng joked.</p><h2 id="How-to-fight-the-application-layer-war"><a href="#How-to-fight-the-application-layer-war" class="headerlink" title="How to fight the application layer war?"></a>How to fight the application layer war?</h2><p>As the horn of the “second half” sounds, participants of every identity and background are adjusting their pace as quickly as possible, hoping to find their place in the new stage.</p><p>Manufacturers that are further ahead and qualified to go online are already gearing up.</p><p>A staff member at one general-purpose large model company told “Shijie”: “After receiving notice from the local government, our first step was to announce the opening to the whole of society as quickly as possible.” By their observation, the general-purpose large model companies have acted in near lockstep. 
Once qualified, they immediately began providing services to the public.</p><p>For the first batch of companies to open their large models, however, the task more urgent than attracting To C users is to break through in the To B market and establish themselves as the “water sellers” of the gold rush. Looking around, every company is striving to build a more complete large model product line and actively courting more partners and customer enterprises.</p><p>Among the first batch, Baidu was the first to state its position, at the Yunzhi Conference on September 5. Robin Li declared, “Baidu’s goal is to build a solid foundation of large models and support the development of AI native applications.” AI native applications are applications developed directly on the AI platform; in Li’s words, they rely on the capabilities of large models to solve problems that previously could not be solved, or were solved poorly.</p><p>To quickly expand its “circle of friends,” Baidu held the “Wenxin Cup” entrepreneurship competition, announcing that it will invest millions of yuan in outstanding teams and provide long-term support in technology, products, development strategy, capital cooperation, and other areas.</p><p><img src="https://cdn.jsdelivr.net/gh/youngjuning/images@main/1695078828131.png"></p><p>Other leading general large model vendors have made similar moves. ByteDance’s large model service platform “Volcano Ark,” for example, provides enterprises with end-to-end platform services such as model fine-tuning, evaluation, and inference, and counts MiniMax among its customers. 
SenseTime likewise stated that its “SenseNova” large model system and generative AI product series now cover five mainstream generative AI applications: natural language interaction, AI document image generation, digital humans, large-scale 3D scene reconstruction, and 3D small object generation, with continued upgrades to come…</p><p>The flip side of betting heavily on ecosystem building is that companies must mobilize ever more investment in computing power, manpower, and other resources. As the leaders’ race to build general large model ecosystems intensifies, more small and medium-sized enterprises need to find opportunities in niche markets.</p><p>In the view of one large model practitioner, small and medium-sized large model companies can hardly match the funds, talent, and R&amp;D scale of the Internet giants, so there is no point “struggling” over general large models anymore. “Rather than expanding scale, it is better to find the point of demand and open up a niche market as soon as possible.” From this perspective, many smaller companies have already begun trying to land deployments and monetize. A staff member of Fourth Paradigm told “Shijie”: “The scenarios we have landed in so far include medical ones, upgrading our offerings with a large model for existing users. We still take a To B approach to large models, mainly because we never had To C product genes. It is not that To C commercialization is harder, but that working from a To B perspective better fits the company’s original positioning and established model.”</p><p>The head of another large model company, who asked not to be named, revealed: “As early as the first half of this year, we had already deployed some enterprise-level large models in customer scenarios.”</p><p>This is just the beginning. 
In the future, as the “tentacles” of general large model companies extend into the various niches, small and medium-sized enterprises may still have a chance to compete.</p><p>Beyond the supply side, many large model application-layer entrepreneurs are also greeting new opportunities and challenges.</p><p>One application-layer entrepreneur said: “I have always believed there will be many kinds of intelligent models in the future, forming many kinds of intelligent service providers. The filing regime means an ecosystem of intelligence suppliers that can grow compliantly on top of several general models will take shape.”</p><p>As general large models gradually open up, it becomes easier for enterprises to obtain AI capabilities, and competition at the application layer will intensify. Yet the small and medium-sized players in this field are not worried for now. Zhou Jian, CEO of Lanma Technology, said: “The discovery of any new land is paved by pioneers, and then more people come in.</p><p>Competition will certainly be fierce, but no industry can form a complete monopoly. Even large companies cannot build all the internal applications of every enterprise.”</p><p>In his view, this is in fact an excellent opportunity. By his observation, some enterprises in various vertical industries take a very positive attitude toward digital transformation, yet the big companies are not always their first choice.</p><p>For various reasons, some problems cannot be solved by general-purpose large models, and some companies do not necessarily need one. 
For example, some financial institutions, out of caution over risk control and data security, may prefer to work with vertical-industry model companies or develop large models in-house.</p><p>For application-layer players, the key to the next round of competition is accurately capturing user needs, that is, the ability to define products.<br>“I think the opening of large models has little impact on most entrepreneurs. We expect a flood of large model interfaces in the future,” said Luo Yuchen, founder of large model application company Yantu Intelligence.</p><p>“The big players have the advantage in resources, but small ones are clearly more flexible and down-to-earth. The real question is how AI native applications should be built.” As Luo Yuchen told “Shijie,” an AI native application is nothing without large models. “How can it seize market opportunities and truly secure a place? When will AI native applications really break out? These are the questions the industry cares about most.”</p><p>In Zhou Jian’s view, by early next year China’s large models will largely reach the capability of GPT-3.5; reaching the level of GPT-4 may take at least another year. The industry will keep evolving, however: “GPT too has its shortcomings. As we build applications on large models, we can see there are still many ways to fill in the gaps of different large models, and perhaps more differentiated large models will emerge in the future.”</p><p>In short, the stage of “waiting for the wind” has passed; the east wind has arrived, and competition across every link of the market will accelerate. Once the gale blows away the dust, whether what remains is real gold or dross will still have to be tested by the application market.</p>]]></content>
    
    
    <summary type="html">With multiple large models announcing their public availability, the era where everyone can &quot;command&quot; AI has arrived. In just half a year, 130 large models have emerged in China, making the &quot;battle of a hundred models&quot; a reality.</summary>
    
    
    
    <category term="Technology" scheme="https://www.nablepart.com/categories/Technology/"/>
    
    
    <category term="AI" scheme="https://www.nablepart.com/tags/AI/"/>
    
    <category term="李彦宏" scheme="https://www.nablepart.com/tags/%E6%9D%8E%E5%BD%A6%E5%AE%8F/"/>
    
  </entry>
  
  <entry>
    <title>AIGC — An Introduction to Auto-GPT</title>
    <link href="https://www.nablepart.com/768969310164/"/>
    <id>https://www.nablepart.com/768969310164/</id>
    <published>2023-07-17T11:50:26.000Z</published>
    <updated>2025-08-25T09:00:39.798Z</updated>
    
    <content type="html"><![CDATA[<h2 id="AIGC-—-Auto-GPT-介绍"><a href="#AIGC-—-Auto-GPT-介绍" class="headerlink" title="AIGC — An Introduction to Auto-GPT"></a>AIGC — An Introduction to Auto-GPT</h2><h3 id="展示自主-AI"><a href="#展示自主-AI" class="headerlink" title="Showcasing autonomous AI"></a>Showcasing <strong>autonomous AI</strong></h3><p><img src="https://cdn-images-1.medium.com/max/2000/1*NKkmD0SKFoV9QGQGnBZdTg.png"></p><p>A recently released GPT-based application called Auto-GPT is now available on GitHub (<a href="https://github.com/Torantulino/Auto-GPT">https://github.com/Torantulino/Auto-GPT</a>); its innovative feature is that it autonomously generates its own “self-prompts.”</p><p>Auto-GPT’s self-prompting has sparked widespread curiosity, prompting many to reflect on how fast AI technology has advanced in a relatively short time.</p><p>Below is an example of Auto-GPT creating and deploying a website built with React and Tailwind CSS:</p><p><img src="https://cdn-images-1.medium.com/max/2000/0*Xpcz2LoE6RYNwsD4"></p><h2 id="什么是-Auto-GPT"><a href="#什么是-Auto-GPT" class="headerlink" title="What is Auto-GPT"></a>What is Auto-GPT</h2><p>Auto-GPT is an open-source Python application published on GitHub by a developer known as “Significant Gravitas.” Harnessing GPT-4, this open-source tool lets an AI operate independently, eliminating the need for continuous user input.</p><p>Auto-GPT is a cutting-edge open-source application that showcases the potential of the GPT-4 language model. Powered by GPT-4, the program chains together the LLM’s “thoughts” to autonomously accomplish any goal it is given. As a pioneering example of GPT-4 operating fully autonomously, Auto-GPT pushes the boundaries of what AI can do.</p><p>Once you set an overall goal for Auto-GPT, it carries out step-by-step actions to achieve it. This approach is based on the concept of an “AI agent” that can browse the Internet and perform tasks on a computer independently, removing the need for constant supervision.</p><h2 id="Auto-GPT-的特点"><a href="#Auto-GPT-的特点" class="headerlink" title="Features of Auto-GPT"></a>Features of Auto-GPT</h2><p>Its main features include:</p><ul><li><p>Internet access for searching and gathering information</p></li><li><p>Long-term and short-term memory management</p></li><li><p>Text generation using GPT-4 instances</p></li><li><p>Connections to widely used websites and platforms</p></li><li><p>File storage and summarization with GPT-3.5</p></li></ul><h2 id="Auto-GPT-的要求"><a href="#Auto-GPT-的要求" class="headerlink" title="Requirements for Auto-GPT"></a>Requirements for Auto-GPT</h2><ul><li><p>Python 3.8+</p></li><li><p>An OpenAI API key</p></li><li><p>A PINECONE API key</p></li></ul><p>Optional:</p><ul><li>An ElevenLabs key (if you want the AI to speak)</li></ul><p>Pinecone provides the AI’s long-term memory database (<a href="https://www.pinecone.io/">https://www.pinecone.io/</a>).</p><h2 id="Auto-GPT-的安装"><a href="#Auto-GPT-的安装" class="headerlink" title="Installing Auto-GPT"></a>Installing Auto-GPT</h2><p>Assuming all requirements are met, follow these steps:</p><h3 id="克隆-Auto-GPT-仓库"><a href="#克隆-Auto-GPT-仓库" 
class="headerlink" title="Clone the Auto-GPT repository"></a>Clone the Auto-GPT repository</h3><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="meta prompt_">$ </span><span class="language-bash">git <span class="built_in">clone</span> https://github.com/Torantulino/Auto-GPT.git</span></span><br></pre></td></tr></table></figure><h3 id="安装所需的依赖"><a href="#安装所需的依赖" class="headerlink" title="Install the required dependencies"></a>Install the required dependencies</h3><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="meta prompt_">$ </span><span class="language-bash">pip install -r requirements.txt</span></span><br></pre></td></tr></table></figure><h3 id="更新环境变量文件"><a href="#更新环境变量文件" class="headerlink" title="Update the environment variables file"></a>Update the environment variables file</h3><p>Rename .env.template to .env and fill in your OPENAI_API_KEY. If you plan to use speech mode, also fill in your ELEVEN_LABS_API_KEY.</p><ul><li><p>You can get your OpenAI API key from: <a href="https://platform.openai.com/account/api-keys">https://platform.openai.com/account/api-keys</a>.</p></li><li><p>You can get your ElevenLabs API key from: <a href="https://elevenlabs.io/">https://elevenlabs.io</a>. You can find your xi-api-key in the “Profile” tab on the site.</p></li></ul><h3 id="运行-main-py-文件"><a href="#运行-main-py-文件" class="headerlink" title="Run the main.py file"></a>Run the main.py file</h3><p>Run the main.py Python script in your terminal:</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="meta prompt_">$ </span><span class="language-bash">python scripts/main.py</span></span><br></pre></td></tr></table></figure><p>After each of Auto-GPT’s actions, type “NEXT COMMAND” to authorize it to continue. To exit the program, type “exit” and press Enter.</p><h3 id="语音模式"><a href="#语音模式" class="headerlink" title="Speech mode"></a>Speech mode</h3><p>To use speech mode, run the following command:</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td 
class="code"><pre><span class="line"><span class="meta prompt_">$ </span><span class="language-bash">python scripts/main.py --speak</span></span><br></pre></td></tr></table></figure><h3 id="Logs-collection"><a href="#Logs-collection" class="headerlink" title="Logs collection"></a>Logs collection</h3><p>You will find activity and error logs in the .&#x2F;logs folder. To output debug logs:</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="meta prompt_">$ </span><span class="language-bash">python scripts/main.py --debug</span></span><br></pre></td></tr></table></figure><h2 id="Caution-Continuous-Mode-⚠️"><a href="#Caution-Continuous-Mode-⚠️" class="headerlink" title="Caution! Continuous Mode ⚠️"></a>Caution! Continuous Mode ⚠️</h2><p>This mode lets Auto-GPT run the AI 100% automated, without user authorization. Continuous mode is not recommended: it is potentially dangerous and may cause your AI to run forever or carry out actions you would not normally authorize. Use it at your own risk.</p><p>Run the main.py Python script in your terminal:</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">python scripts/main.py --continuous</span><br></pre></td></tr></table></figure><p>To exit the program, press Ctrl + C.</p><h2 id="图像生成"><a href="#图像生成" class="headerlink" title="Image generation"></a>Image generation</h2><p>By default, Auto-GPT uses DALL-E for image generation. To use Stable Diffusion, a <a href="https://huggingface.co/settings/tokens">HuggingFace API Token</a> is required.</p><p>Once you have the token, set the following variables in your .env file:</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">IMAGE_PROVIDER=sd</span><br><span class="line">HUGGINGFACE_API_TOKEN=&quot;YOUR_HUGGINGFACE_API_TOKEN&quot;</span><br></pre></td></tr></table></figure><h2 id="限制"><a href="#限制" class="headerlink" title="Limitations"></a>Limitations</h2><p>Auto-GPT is currently experimental and intended to showcase the capabilities of GPT-4, and it has some limitations:</p><ul><li><p>It is not a polished application or product, but an experimental project.</p></li><li><p>Its performance in complex, real-world business scenarios may fall short. If you succeed in such a setting, feel free to share your findings!</p></li><li><p>Running costs can be high, so it is advisable to set and monitor your API key limits with OpenAI.</p></li></ul><h2 id="结论"><a href="#结论" class="headerlink" title="Conclusion"></a>Conclusion</h2><ul><li><p>Auto-GPT’s architecture is built on GPT-4 and GPT-3.5, connected via API;</p></li><li><p>Auto-GPT can iterate autonomously, improving the accuracy of its output through self-critical review, building on its previous work, and integrating its prompt history;</p></li><li><p>Auto-GPT has memory management features, including long-term storage in a Pinecone database, context retention, and improved decision-making based on them.</p></li></ul>]]></content>
    
    
    <summary type="html">Installing Auto-GPT involves cloning the repository, installing dependencies, and updating the environment variables file. This guide walks you through every step needed to install Auto-GPT.</summary>
    
    
    
    
    <category term="Python" scheme="https://www.nablepart.com/tags/Python/"/>
    
    <category term="AI" scheme="https://www.nablepart.com/tags/AI/"/>
    
    <category term="Auto-GPT" scheme="https://www.nablepart.com/tags/Auto-GPT/"/>
    
    <category term="安装" scheme="https://www.nablepart.com/tags/%E5%AE%89%E8%A3%85/"/>
    
    <category term="依赖" scheme="https://www.nablepart.com/tags/%E4%BE%9D%E8%B5%96/"/>
    
    <category term="GitHub" scheme="https://www.nablepart.com/tags/GitHub/"/>
    
    <category term="OpenAI" scheme="https://www.nablepart.com/tags/OpenAI/"/>
    
    <category term="PINECONE" scheme="https://www.nablepart.com/tags/PINECONE/"/>
    
    <category term="ElevenLabs" scheme="https://www.nablepart.com/tags/ElevenLabs/"/>
    
    <category term="语音模式" scheme="https://www.nablepart.com/tags/%E8%AF%AD%E9%9F%B3%E6%A8%A1%E5%BC%8F/"/>
    
    <category term="环境变量" scheme="https://www.nablepart.com/tags/%E7%8E%AF%E5%A2%83%E5%8F%98%E9%87%8F/"/>
    
    <category term="GPT-4" scheme="https://www.nablepart.com/tags/GPT-4/"/>
    
    <category term="GPT-3.5" scheme="https://www.nablepart.com/tags/GPT-3-5/"/>
    
    <category term="DALL-e" scheme="https://www.nablepart.com/tags/DALL-e/"/>
    
    <category term="Stable Diffusion" scheme="https://www.nablepart.com/tags/Stable-Diffusion/"/>
    
  </entry>
  
  <entry>
    <title>Hello World</title>
    <link href="https://www.nablepart.com/34b99cab2c08/"/>
    <id>https://www.nablepart.com/34b99cab2c08/</id>
    <published>2023-07-17T11:50:26.000Z</published>
    <updated>2025-08-25T09:00:39.802Z</updated>
    
    <content type="html"><![CDATA[<p>Welcome to <a href="https://hexo.io/">Hexo</a>! This is your very first post. Check <a href="https://hexo.io/docs/">documentation</a> for more info. If you get any problems when using Hexo, you can find the answer in <a href="https://hexo.io/docs/troubleshooting.html">troubleshooting</a> or you can ask me on <a href="https://github.com/hexojs/hexo/issues">GitHub</a>.</p><h2 id="Quick-Start"><a href="#Quick-Start" class="headerlink" title="Quick Start"></a>Quick Start</h2><h3 id="Create-a-new-post"><a href="#Create-a-new-post" class="headerlink" title="Create a new post"></a>Create a new post</h3><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">$ hexo new <span class="string">&quot;My New Post&quot;</span></span><br></pre></td></tr></table></figure><p>More info: <a href="https://hexo.io/docs/writing.html">Writing</a></p><h3 id="Run-server"><a href="#Run-server" class="headerlink" title="Run server"></a>Run server</h3><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">$ hexo server</span><br></pre></td></tr></table></figure><p>More info: <a href="https://hexo.io/docs/server.html">Server</a></p><h3 id="Generate-static-files"><a href="#Generate-static-files" class="headerlink" title="Generate static files"></a>Generate static files</h3><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">$ hexo generate</span><br></pre></td></tr></table></figure><p>More info: <a href="https://hexo.io/docs/generating.html">Generating</a></p><h3 id="Deploy-to-remote-sites"><a href="#Deploy-to-remote-sites" class="headerlink" title="Deploy to remote sites"></a>Deploy to remote sites</h3><figure class="highlight bash"><table><tr><td class="gutter"><pre><span 
class="line">1</span><br></pre></td><td class="code"><pre><span class="line">$ hexo deploy</span><br></pre></td></tr></table></figure><p>More info: <a href="https://hexo.io/docs/one-command-deployment.html">Deployment</a></p>]]></content>
    
    
      
      
    <summary type="html">&lt;p&gt;Welcome to &lt;a href=&quot;https://hexo.io/&quot;&gt;Hexo&lt;/a&gt;! This is your very first post. Check &lt;a href=&quot;https://hexo.io/docs/&quot;&gt;documentation&lt;/a&gt; for</summary>
      
    
    
    
    
  </entry>
  
  <entry>
    <title>Cryptocurrencies are in the early stages of a three-year bull market</title>
    <link href="https://www.nablepart.com/ff17059d587f/"/>
    <id>https://www.nablepart.com/ff17059d587f/</id>
    <published>2023-06-15T09:32:12.000Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
    <content type="html"><![CDATA[<h2 id="Cryptocurrency-bull-market-coming"><a href="#Cryptocurrency-bull-market-coming" class="headerlink" title="Cryptocurrency bull market coming?"></a>Cryptocurrency bull market coming?</h2><p>Last week’s court ruling on the cryptocurrency Ripple (XRP) excited the digital asset industry, but this week has been relatively quiet.</p><p><img src="https://s2.loli.net/2023/09/19/zryckKbHA2nmTVh.png"></p><p>Bitwise Asset Management is a cryptocurrency-focused firm with over $1 billion in assets under management. The company’s CEO, Hunter Horsley, recently spoke about his expectations for the market in the coming years.</p><p>Will it rise further in the future?</p><p>In an interview with The Distributed Ledger, Horsley said that the crypto market could experience a bull market for at least two years.</p><p>Horsley said that the history of cryptocurrencies can be divided into several four-year cycles. “Historically, Bitcoin and the crypto space will go up for three years and then go into a bear market, usually down 60 to 70 percent,” he said.<br>Each bull market, Horsley said, has had a different catalyst. Bitcoin’s birth in 2009 triggered the first bull market in cryptocurrencies, which lasted until 2013, while Ethereum’s launch in 2015 triggered a bull cycle that ended in 2018. 
From 2019 to 2021, different applications of blockchain drove crypto prices up, Horsley said.</p><p><img src="https://s2.loli.net/2023/09/19/dFnoZEkHS7MWyPO.png"></p><h2 id="According-to-CoinDesk-the-conditions-for-the-arrival-of-a-bull-market"><a href="#According-to-CoinDesk-the-conditions-for-the-arrival-of-a-bull-market" class="headerlink" title="According to CoinDesk, the conditions for the arrival of a bull market"></a>According to CoinDesk, the conditions for the arrival of a bull market</h2><p>In 2022, both Bitcoin and Ethereum fell more than 60 percent as the Federal Reserve hiked interest rates for more than a year.</p><p>This year is the first year of a new four-year cycle for cryptocurrencies, and digital assets are likely to move higher, Horsley said. If history is any guide, he said, cryptocurrencies will see a bull market in the next two to three years.<br>So far this year, the price of Bitcoin has risen more than 80 percent to around $30,000, but it’s still down more than 50 percent from its 2021 all-time high, according to CoinDesk.</p><p>Horsley noted that mainstream adoption of Bitcoin and other digital assets could drive a cryptocurrency rally during this cycle. “Institutional mainstream counterparties, consumer use cases, and broader adoption of decentralized apps with user numbers ranging from 5 million to 10 million to 100 million could be catalysts for a bull market,” he said.</p><p><img src="https://s2.loli.net/2023/09/19/9FRJBimnGqw6lA1.png"></p><p>To be sure, the cryptocurrency space is still relatively young, and past performance is not necessarily indicative of future results.</p><h2 id="Digital-Assets-and-U-S-Treasuries-Make-a-Great-Combination-for-Barbell-Portfolios"><a href="#Digital-Assets-and-U-S-Treasuries-Make-a-Great-Combination-for-Barbell-Portfolios" class="headerlink" title="Digital Assets and U.S. Treasuries Make a Great Combination for Barbell Portfolios"></a>Digital Assets and U.S. 
Treasuries Make a Great Combination for Barbell Portfolios</h2><p>Horsley said the sector could also benefit greatly from any progress in the transparency of cryptocurrency regulation in the United States. “The biggest barrier to cryptocurrency maturity and adoption over the last four or five years has been regulatory transparency.” As U.S. regulators tighten their grip on the industry, “it means that some projects and assets are going to be on the wrong side of the fence. You’re going to pay for that. But the industry as a whole will benefit.”<br>Next, consider crypto’s place in a portfolio.</p><p><img src="https://s2.loli.net/2023/09/19/fhsgoxP6NIupSkH.png"></p><p>Ben Weiss, CEO of CoinFlip, said cryptocurrencies could be a good option for portfolios that employ a barbell strategy in the current market environment.<br>The barbell strategy is the practice of investing in two baskets of assets: one very safe, the other holding speculative but potentially highly rewarding assets.</p><p>Weiss noted that while cryptocurrencies are volatile, Bitcoin’s returns this year and over the past 10 years have been much higher than those of most other major assets. This, Weiss said, makes digital assets and U.S. Treasuries a great combination for a barbell portfolio: Treasuries are considered very safe and currently yield more than 5 percent.</p><h2 id="Instant-crypto"><a href="#Instant-crypto" class="headerlink" title="Instant crypto"></a>Instant crypto</h2><p>Bitcoin is down 2.9 percent over the past seven days, trading at about $29,728 on Thursday, according to CoinDesk. Over the same period, Ethereum fell 1.7 percent to about $1,885.</p>]]></content>
    
    
    <summary type="html">This article from professional asset management firms as well as analysts analyzes the characteristics and predictions of a cryptocurrency bull market, which may be imminent.</summary>
    
    
    
    <category term="Blockchain" scheme="https://www.nablepart.com/categories/Blockchain/"/>
    
    
    <category term="crypto" scheme="https://www.nablepart.com/tags/crypto/"/>
    
    <category term="xrp" scheme="https://www.nablepart.com/tags/xrp/"/>
    
    <category term="btc" scheme="https://www.nablepart.com/tags/btc/"/>
    
    <category term="eth" scheme="https://www.nablepart.com/tags/eth/"/>
    
    <category term="bull market" scheme="https://www.nablepart.com/tags/bull-market/"/>
    
  </entry>
  
  <entry>
    <title>DOGE Coin: A Disruptive Journey into Digital Currency</title>
    <link href="https://www.nablepart.com/f0270808b7ed/"/>
    <id>https://www.nablepart.com/f0270808b7ed/</id>
    <published>2023-05-23T10:30:17.000Z</published>
    <updated>2025-08-25T09:00:39.790Z</updated>
    
    <content type="html"><![CDATA[<h2 id="Understanding-the-Origin-Principles-and-Potential-of-Dog-Coin"><a href="#Understanding-the-Origin-Principles-and-Potential-of-Dog-Coin" class="headerlink" title="Understanding the Origin, Principles and Potential of Dog Coin"></a>Understanding the Origin, Principles and Potential of Dogecoin</h2><p>With the advancement of technology and the popularization of the Internet, the concept of digital currency is gradually entering public view. Among digital currencies, Dogecoin, with its unique features and strong community, has been warmly welcomed worldwide. In this article, we will introduce in detail the origin, basic principles, potential and future development of Dogecoin.</p><h2 id="the-origin-of-the-dog-coin"><a href="#the-origin-of-the-dog-coin" class="headerlink" title="the origin of the dog coin"></a>The Origin of Dogecoin</h2><p>Dogecoin was introduced in December 2013. Co-founded by Billy Markus and Jackson Palmer, two software engineers, it was originally intended as a joke satirizing the speculative hype in the cryptocurrency market.
However, Dogecoin became popular with a large number of users within a short time of its release and became a widely circulated digital currency.</p><h2 id="the-operation-principle-of-dogcoin"><a href="#the-operation-principle-of-dogcoin" class="headerlink" title="the operation principle of dogcoin"></a>How Dogecoin Works</h2><p>Dogecoin runs on open-source blockchain technology and uses a hash function called Scrypt, in contrast to Bitcoin’s SHA-256. Scrypt’s main feature is that it is memory-hard: it deliberately demands a large amount of memory, which originally kept Dogecoin mining accessible to ordinary hardware and accelerated the coin’s generation and circulation.</p><p><img src="https://s2.loli.net/2023/09/19/TO9x8ymIYUS16Q4.jpg"></p><p>Dogecoin transactions are recorded very quickly: new blocks are confirmed approximately every 60 seconds through the process known as mining. To prevent malicious attacks, the Dogecoin network uses a Proof-of-Work mechanism to ensure the security and validity of all transactions.</p><h2 id="Dogcoin’s-Potential-and-Future-Development"><a href="#Dogcoin’s-Potential-and-Future-Development" class="headerlink" title="Dogcoin’s Potential and Future Development"></a>Dogecoin’s Potential and Future Development</h2><p>Although Dogecoin began as a satire of the cryptocurrency market, it has shown strong potential in practice. First, its fast transactions and low fees give users a convenient and secure trading experience. Second, mining Dogecoin is relatively easy, which has attracted a large number of miners and fans.
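</p><p>The Scrypt proof of work described above can be sketched in a few lines of Python. This is an illustrative toy, not Dogecoin’s actual consensus code: real mining hashes an 80-byte block header against a network-adjusted target, while the header bytes, difficulty and helper names below are invented for the example.</p>

```python
import hashlib

def scrypt_hash(data: bytes) -> bytes:
    # Dogecoin, like Litecoin, hashes block headers with Scrypt
    # using N=1024, r=1, p=1; dklen=32 gives a 256-bit digest.
    return hashlib.scrypt(data, salt=data, n=1024, r=1, p=1, dklen=32)

def mine(header: bytes, difficulty_bits: int, max_nonce: int = 200_000):
    """Search for a nonce whose Scrypt hash has `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    for nonce in range(max_nonce):
        digest = scrypt_hash(header + nonce.to_bytes(4, "little"))
        if int.from_bytes(digest, "big") < target:
            return nonce, digest
    raise RuntimeError("no valid nonce found within max_nonce attempts")

nonce, digest = mine(b"toy-block-header", difficulty_bits=8)
print(f"nonce={nonce} hash={digest.hex()}")
```

<p>With only 8 difficulty bits a valid nonce is typically found within a few hundred attempts; real networks tune the target so the expected search time matches the block interval.</p><p>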
In addition, the Dogecoin community is strong and has many supporters, which provides a solid foundation for its future development.</p><p><img src="https://s2.loli.net/2023/09/19/dq6TyEPJniBKR4b.png"></p><p>However, like all emerging things, Dogecoin faces some challenges. The biggest concerns its monetary design: unlike Bitcoin, whose supply is capped, Dogecoin has no supply cap, with roughly five billion new coins issued every year. This steady inflation, combined with speculative trading, leads to large price fluctuations. The Dogecoin community has debated responses, such as adjusting mining incentives, but the issue still needs further exploration and practice.</p><p>In addition, as the cryptocurrency market matures and regulatory policies tighten, Dogecoin also needs more technological innovation and compliant operations to adapt to market changes. For example, Dogecoin could explore better integration with real-world payment systems to improve its usefulness in daily transactions.</p><h2 id="Conclusion"><a href="#Conclusion" class="headerlink" title="Conclusion"></a>Conclusion</h2><p>Although Dogecoin originated in chance and jokes, it has shown strong potential and value in actual operation. As a fast, secure and low-cost digital currency, Dogecoin has won the favor of a large number of users. In the future, as technology advances and the market environment changes, Dogecoin is expected to play a greater role in the digital currency market.</p><p><img src="https://s2.loli.net/2023/09/19/FNxDHEogpK5qfRM.png"></p><p>However, we should also recognize that, as a new phenomenon, Dogecoin still faces many challenges. How to address these problems so that Dogecoin can adapt to the market and continue to grow remains something we must keep exploring in practice.
Overall, Dogecoin is a digital currency full of potential and possibilities, worthy of our continued attention.</p>]]></content>
    
    
    <summary type="html">This article introduces the principles and history of Dogecoin, followed by a bold prediction and summary of its future development and potential.</summary>
    
    
    
    <category term="Blockchain" scheme="https://www.nablepart.com/categories/Blockchain/"/>
    
    
    <category term="crypto" scheme="https://www.nablepart.com/tags/crypto/"/>
    
    <category term="doge coin" scheme="https://www.nablepart.com/tags/doge-coin/"/>
    
    <category term="$doge" scheme="https://www.nablepart.com/tags/doge/"/>
    
  </entry>
  
  <entry>
    <title>MetaTown: A Platform for Building Digital Assets by Yourself</title>
    <link href="https://www.nablepart.com/fe70cbe428f6/"/>
    <id>https://www.nablepart.com/fe70cbe428f6/</id>
    <published>2022-10-01T12:00:00.000Z</published>
    <updated>2025-08-25T09:00:39.794Z</updated>
    
    <content type="html"><![CDATA[<p>Huawei Cloud Solution as Code has relaunched the solution “Building a Digital Asset Platform Based on MetaTown”. Backed by the underlying blockchain technology of Huawei Cloud Digital Asset Chain, it realizes full lifecycle management of digital assets: minting, issuance, circulation and rights confirmation. A one-click deployment option and a rapid-experience version with open source code are also provided to help SMEs quickly build their own digital asset platforms.</p><p><img src="https://cdn.jsdelivr.net/gh/youngjuning/images@main/202310292026663.webp"></p><h2 id="I-NFT-Market-Prospect"><a href="#I-NFT-Market-Prospect" class="headerlink" title="I. NFT Market Prospect"></a>I. NFT Market Prospect</h2><p>In China, NFTs first landed at scale in digital collectibles, digital marketing and similar scenarios. 2021 is known as the first year of “digital collectibles”, with a total transaction volume of more than 8.16 million and a total transaction value of more than 6.5 billion U.S. dollars. As of May 2022, there were more than 450 domestic digital collectibles platforms, and the top 50 platforms had sold 5 million collectibles, with sales reaching $200 million. The domestic cloud market for digital collectibles was forecast to reach 300 million dollars in 2023.</p><h3 id="II-NFT-Scenario-Pain-Points-Requirements"><a href="#II-NFT-Scenario-Pain-Points-Requirements" class="headerlink" title="II. NFT Scenario Pain Points Requirements"></a>II. NFT Scenario Pain Points and Requirements</h3><h3 id="1-Uniqueness"><a href="#1-Uniqueness" class="headerlink" title="1. Uniqueness"></a>1. Uniqueness</h3><p>An NFT needs blockchain technology to give it uniqueness and non-tamperability, and it also needs a digital asset chain platform for asset authentication, credible preservation and safe transactions.
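</p><p>The uniqueness and non-tamperability mentioned above ultimately rest on cryptographic hashing: each on-chain record commits to the asset’s content and to the previous record, so any later alteration is detectable. The following Python sketch illustrates only the idea; it is not DAC’s implementation, and the record fields and helper names are invented.</p>

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    """Hash a record via a canonical JSON encoding so the digest is stable."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_record(chain: list, asset_bytes: bytes, owner: str) -> None:
    """Append a record committing to the asset content and the previous record."""
    record = {
        "asset_digest": hashlib.sha256(asset_bytes).hexdigest(),  # unique fingerprint
        "owner": owner,
        "prev": chain[-1]["hash"] if chain else "0" * 64,  # link to predecessor
    }
    record["hash"] = record_hash(record)
    chain.append(record)

def verify(chain: list) -> bool:
    """Recompute every link; tampering with any field breaks the chain."""
    prev = "0" * 64
    for record in chain:
        body = {k: v for k, v in record.items() if k != "hash"}
        if record["prev"] != prev or record_hash(body) != record["hash"]:
            return False
        prev = record["hash"]
    return True

chain: list = []
append_record(chain, b"artwork-1", "alice")
append_record(chain, b"artwork-2", "bob")
print(verify(chain))  # True
chain[0]["owner"] = "mallory"
print(verify(chain))  # False
```

<p>The same recomputation is how any verifier can independently audit the chain without trusting the platform that produced it.</p><p>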
Domestic NFT platforms mostly use consortium chains, so whether the underlying blockchain is independently developed, and whether its consensus mechanism and validator nodes carry authority and credibility, have become evaluation criteria for China’s digital asset trading platforms.</p><h3 id="2-Real-Name-Authentication"><a href="#2-Real-Name-Authentication" class="headerlink" title="2. Real Name Authentication"></a>2. Real-Name Authentication</h3><p>Under the Network Security Law of the People’s Republic of China, blockchain information service providers must authenticate users’ real identity information based on organization codes, identity document numbers or mobile phone numbers. If a user does not authenticate their real identity information, the provider shall not provide services to that user.</p><h3 id="3-High-Concurrency-and-Elasticity-Business-Scenarios"><a href="#3-High-Concurrency-and-Elasticity-Business-Scenarios" class="headerlink" title="3. High Concurrency and Elasticity Business Scenarios"></a>3. High-Concurrency and Elastic Business Scenarios</h3><p>In digital asset flash-sale scenarios, a sudden influx of users and highly concurrent traffic often cause failures such as inaccessible applications and failed payments. Moreover, most digital assets are on sale for a limited time, producing obvious traffic surges; riding out these peaks without wasting resources has become an urgent customer need.</p><h3 id="4-Image-and-Multimedia-Storage"><a href="#4-Image-and-Multimedia-Storage" class="headerlink" title="4. Image and Multimedia Storage"></a>4. Image and Multimedia Storage</h3><p>Digital assets are rich in variety, mainly digital pictures, 3D models, videos and so on, which require a large amount of storage space for mass distribution and need accelerated access.</p><h2 id="III-Based-on-the-MetaTown-Digital-Asset-Platform"><a href="#III-Based-on-the-MetaTown-Digital-Asset-Platform" class="headerlink" title="III. Based on the MetaTown Digital Asset Platform"></a>III. The MetaTown-Based Digital Asset Platform</h2><p>This solution is based on the Huawei Cloud open source project MetaTown, which can help you quickly build your own digital asset management platform on Huawei Cloud. <a href="https://gitee.com/HuaweiCloudDeveloper/huaweicloud-solution-build-a-digital-assets-platform-based-on-meta-town">MetaTown</a> is a digital asset management platform built on the underlying blockchain technology provided by Huawei Cloud Digital Asset Chain (DAC), realizing full lifecycle management of digital assets: minting, issuance, circulation and rights confirmation.</p><p>
<img src="https://cdn.jsdelivr.net/gh/youngjuning/images@main/202310292029896.webp"></p><p>Building a digital asset platform based on MetaTown</p><h3 id="Capability-One-Digital-Asset-Chain-DAC"><a href="#Capability-One-Digital-Asset-Chain-DAC" class="headerlink" title="Capability One: Digital Asset Chain DAC"></a>Capability One: Digital Asset Chain DAC</h3><p>Digital Asset Chain (DAC) is Huawei Cloud’s self-developed digital asset chain platform built on NFT technology. Based on Huawei Cloud’s blockchain engine, and through innovations such as timestamps and smart contracts, it enables rights confirmation, trustworthy preservation and safe transactions for digital assets. It supports copyright protection for original digital products in various forms and obtains unique digital copyright identifiers assigned by the China Copyright Protection Center, making the copyright transaction flow clearly traceable and credible. It also provides infringement monitoring, evidence fixing and copyright appraisal capabilities to resolve copyright disputes quickly.</p><h3 id="Capability-II-Human-Identification"><a href="#Capability-II-Human-Identification" class="headerlink" title="Capability II: Human Identification"></a>Capability II: Identity Verification</h3><p>The ID verification service IVS compares ID card information and face pictures against authoritative databases to verify a user’s identity.
The service helps customers quickly verify the identity of blockchain information service users and meet regulatory requirements.</p><h3 id="Capability-III-Distributed-Caching"><a href="#Capability-III-Distributed-Caching" class="headerlink" title="Capability III: Distributed Caching"></a>Capability III: Distributed Caching</h3><p>By integrating the distributed caching service Redis, collection data can be placed in the cache to speed up client access and improve the purchasing experience.</p><h3 id="Capability-4-Digital-Asset-Storage"><a href="#Capability-4-Digital-Asset-Storage" class="headerlink" title="Capability 4: Digital Asset Storage"></a>Capability 4: Digital Asset Storage</h3><p>The digital asset chain provides a rich asset storage management interface, supporting one-click storage of rich media such as pictures, videos, audio, 3D models and text, with safety, high reliability and broad type coverage. NFT data is stored in, and downloaded directly from, the Object Storage Service OBS, which offers single-bucket EB-level storage capacity to meet NFT data storage demands, with ultra-high performance: support for 10 million TPS and
2.4Gb&#x2F;s single-stream upload speed.</p><p>On this basis, the MetaTown digital asset platform has three major advantages:</p><p>(1) <strong>Ultra-high performance.</strong> Huawei Cloud Digital Asset Chain DAC supports 50,000 concurrent on-chain digital asset creations per second and a flow of 10 billion digital asset issuances.</p><p>(2) <strong>One-click deployment.</strong> Users can deploy the various cloud service resources from official sample templates with one click to quickly build the MetaTown digital asset platform.</p><p>(3) <strong>Open source and customizable.</strong> The solution is open source, free for commercial use, and can be customized on top of the source code to meet the complexity and variety of NFT business scenarios.</p><h2 id="Fourth-MetaTown-program-application-scenarios"><a href="#Fourth-MetaTown-program-application-scenarios" class="headerlink" title="Fourth, MetaTown program application scenarios"></a>IV. MetaTown Application Scenarios</h2><p>Based on MetaTown, the digital asset platform solution covers multiple industries and meets the needs of various digital asset scenarios:</p><h3 id="Scenario-1-Cultural-and-tourism-industry"><a href="#Scenario-1-Cultural-and-tourism-industry" class="headerlink" title="Scenario 1: Cultural and tourism industry"></a>Scenario 1: Cultural and tourism industry</h3><p>A digital collectible uses blockchain technology to generate a unique digital credential for a specific work or artwork, protecting its digital copyright while enabling authentic and credible issuance, purchase, collection and use.
Through physical-to-digital mapping, metadata uploading and other technical means, cultural relics and artworks gain additional value as digital collectibles for consumers to collect, which greatly improves the liquidity of the artworks.</p><h3 id="Scenario-2-Digital-marketing"><a href="#Scenario-2-Digital-marketing" class="headerlink" title="Scenario 2: Digital marketing"></a>Scenario 2: Digital marketing</h3><p>For brand retail, the main goals of NFT digital marketing are acquiring private-domain traffic, converting it directly into sales, and accumulating customer data. A digital collectible serves as a carrier of the brand concept, combining brand value with digital goods. It not only carries brand content but can also be bundled with physical products. Digital collectibles are also a medium of brand communication: presented as member benefits, badges, tickets, gift boxes and other forms, they link user interaction scenarios.</p><h3 id="Scene-3-Games"><a href="#Scene-3-Games" class="headerlink" title="Scene 3: Games"></a>Scene 3: Games</h3><p>Turning game props, assets or IP merchandise into digital assets that can circulate and be monetized expands the user base and increases user stickiness. Game IP incubation and game scenario innovation are currently huge challenges for game manufacturers, especially under the limitations of game license approvals, which reduce opportunities for trial and error and demand a higher standard of game production. Predicting users’ preferences in advance through digital asset issuance and circulation helps innovation decision-making.</p>]]></content>
    
    
    <summary type="html">Huawei Cloud Solution as Code has launched a solution called Building a Digital Asset Platform based on MetaTown. Supported by Huawei Cloud&#39;s digital asset chain, it enables end-to-end management of digital assets, including minting, issuance, circulation, and rights confirmation, using underlying blockchain technology. A one-click deployment option and a quick-experience version have also been released, along with open-source code, to help small and medium-sized enterprises rapidly build their own digital asset platforms.</summary>
    
    
    
    <category term="Blockchain" scheme="https://www.nablepart.com/categories/Blockchain/"/>
    
    
    <category term="MetaTown" scheme="https://www.nablepart.com/tags/MetaTown/"/>
    
    <category term="Huawei" scheme="https://www.nablepart.com/tags/Huawei/"/>
    
    <category term="DAC" scheme="https://www.nablepart.com/tags/DAC/"/>
    
  </entry>
  
  <entry>
    <title>Without hardcore technology, how can we seize the trend of the metaverse?</title>
    <link href="https://www.nablepart.com/5648d561bce1/"/>
    <id>https://www.nablepart.com/5648d561bce1/</id>
    <published>2021-11-01T12:00:00.000Z</published>
    <updated>2025-08-25T09:00:39.806Z</updated>
    
    <content type="html"><![CDATA[<p>Since 2021, Web 3.0 and the metaverse have gradually become popular concepts in the global technology industry: Web 3.0 is the future direction of technological development, while the metaverse is the future of application scenarios and lifestyles; the two complement and depend on each other. Technology giants such as Meta, Google, Apple, Huawei, Tencent and OPPO have been actively investing in related industries.</p><p>However, we have to face the fact that technical limitations are becoming the biggest bottleneck in the development of the metaverse. Progress in Web 3.0, spatial computing, super computing power, instant messaging and other underlying technologies will determine whether the metaverse is an illusory bubble or a real future.</p><p>The metaverse is realized with Web 3.0 technology, which includes blockchain, AI, big data and other technological innovations, as well as the DAO (Decentralized Autonomous Organization) model of network organization. Web 3.0 provides decentralized and data-privacy technologies to support the virtual economy and asset transactions in the metaverse, and the metaverse’s virtual economy can in turn be combined with the cryptocurrency and blockchain technologies of Web 3.0 to achieve decentralized currency exchange and asset management.</p><p>The purpose of the metaverse is to create a realistic virtual world that can replace some activities in the real world; but a seamless connection between the digital world and the real world, in which the two worlds perceive and understand each other, relies on spatial computing technology.
3D engines, VR&#x2F;AR&#x2F;MR, voice and gesture recognition, spatial mapping and digital twins all belong to the spatial computing layer. In particular, 3D engine technology is indispensable for smart cities, virtual spaces, industrial design and highly realistic immersive user experiences.</p><p>In addition, infrastructure and supporting services such as edge computing, instant messaging and patent pools are also becoming a focus of attention in the industry.</p><p>When we talk about the metaverse, it is even more important to talk about the hardcore technology behind it. On May 27-28, GOTC 2023 will be held in Shanghai. Its “Web 3.0 Meta-Universe World” thematic forum will lead the audience through the fog, look ahead to the ultimate form of the metaverse, and break the metaverse down from the technical perspectives of Web 3.0, spatial computing, 3D engines and instant messaging!</p><p><img src="https://oscimg.oschina.net/oscnet/up-30f4cdaa3c61b40b51d699600e5d68f7aaa.jpg"></p><p>The Global Open-source Technology Conference (GOTC) is a grand open-source technology feast for developers around the world, jointly sponsored by the Open Atom Open Source Foundation, Linux Foundation Asia Pacific, Shanghai Pudong Software Park and Open Source China. GOTC 2023 will run for two days in Shanghai from May 27th to 28th, taking the form of an industry exhibition, keynote speeches, thematic forums and an open source bazaar.
Attendees will discuss hot technology topics such as the metaverse, 3D and gaming, eBPF, Web 3.0 and blockchain, as well as popular topics such as open source communities, AIGC, automotive software, AI programming, open source education and training, and cloud native, exploring the future of open source and contributing to its development.</p><p><strong>The GOTC 2023 registration channel is now open.</strong> We sincerely invite global open source enthusiasts in all technical fields to join us!</p>]]></content>
    
    
    <summary type="html">Since 2021, Web 3.0 and the Metaverse have gradually become popular concepts in the global technology industry. Web 3.0 is the future direction of technological development, while the Metaverse is the future of application scenarios and lifestyles; the two complement each other. Tech giants such as Meta, Google, Apple, Huawei, Tencent, and OPPO are actively investing in related industries.</summary>
    
    
    
    <category term="Blockchain" scheme="https://www.nablepart.com/categories/Blockchain/"/>
    
    
    <category term="Metaverse" scheme="https://www.nablepart.com/tags/Metaverse/"/>
    
  </entry>
  
</feed>
